What can be done with Mongo Aggregation / Performance of Mongo Aggregation

Backend · Open · 2 answers · 711 views
野趣味 2021-01-12 06:31

I built a MongoDB database and want to aggregate by certain groupings. I found this document, which does that for me. Everything is OK, but certain limitations are pointed out:

2 Answers
  • 2021-01-12 06:54

I got a valid and helpful answer from Google Groups and would like to share it with you all.

    The limitation is not on the number of documents: the limitation is on the amount of memory used by the final result (or an intermediate result).

So: if you aggregate 200,000 documents but the result fits into 16MB, you're fine. If you aggregate 100 documents and the result does not fit into 16MB, you'll get an error.

Similarly, if you do a sort() or a group() on an intermediate result and that operation needs more than 10% of available RAM, you'll get an error. This is only loosely related to how many documents you have: it is a function of how big that particular stage of the pipeline is.
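To see why the memory footprint tracks pipeline-stage size rather than input size, here is a hedged toy sketch of a $group accumulator in plain JavaScript (the "status"/"amount" fields and document counts are hypothetical, not from the thread):

```javascript
// Toy in-memory equivalent of:
//   { $group: { _id: "$status", total: { $sum: "$amount" } } }
// Memory held by the stage scales with the number of distinct groups,
// not with the number of input documents.
const docs = [];
for (let i = 0; i < 100000; i++) {
  docs.push({ status: i % 3 === 0 ? "A" : "B", amount: i % 10 });
}

const groups = new Map();
for (const d of docs) {
  groups.set(d.status, (groups.get(d.status) || 0) + d.amount);
}

// 100,000 input documents collapse into just 2 accumulator entries.
console.log(groups.size); // 2
```

Here 100,000 documents produce an intermediate state of only two entries, which is why a huge collection can still aggregate comfortably under the limits.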

Can I increase the 16MB limit via any setting?

Is the 16MB limitation only for the end result, or does it apply to the aggregation as a whole (intermediate results, any temporary holdings, plus the end result)?

The 16MB limit is not adjustable. This is the maximum size of a document in MongoDB. Since the aggregation framework is currently implemented as a command, the result from the aggregation must be returned in a single document: hence the 16MB limit.
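The "single document" constraint can be pictured with a rough size check. This is a hedged sketch only: it approximates BSON size with JSON serialization, which is close but not exact, and the result contents are made up:

```javascript
// The command-style aggregation reply is one document, so the whole
// reply must fit under MongoDB's 16MB document cap.
const MAX_BSON_SIZE = 16 * 1024 * 1024; // 16MB

const commandResult = {
  result: Array.from({ length: 1000 }, (_, i) => ({ _id: i, total: i * 2 })),
  ok: 1,
};

// JSON size as a rough stand-in for BSON size (illustrative only).
const approxBytes = Buffer.byteLength(JSON.stringify(commandResult));
console.log(approxBytes < MAX_BSON_SIZE); // true for this small result
```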

    see this post

  • 2021-01-12 07:03

    The amount of processing that can occur with the aggregation framework depends on your schema.

The aggregation framework can currently only return its results within a single document (for larger output you will want to watch: https://jira.mongodb.org/browse/SERVER-3253), and it returns them in the form:

    {
        result: [ /* the grouped results */ ],
        ok: 1   // or 0 on failure
    }

So you have to make sure that what comes back out of your $group/$project is not so big that the reply no longer fits and you don't get the results you need. Most of the time this will not be the case, and a simple $group even over millions of documents can produce a response smaller than 16MB.

Since we have no idea of the size of your documents or of the aggregation queries you wish to run, we cannot advise further.

If any single aggregation operation consumes more than 10 percent of system RAM, the operation will produce an error.

That is pretty self-explanatory, really. If the working set for an operation ($group, computed fields, $sort on computed or grouped fields) is so large that it takes more than 10 percent of RAM, then it will not work.

Unless you misuse the aggregation framework to do your application logic for you, you should never really run into this problem.

    The aggregation system currently stores $group operations in memory, which may cause problems when processing a larger number of groups.

Since $group is hard to do anywhere but in memory (it groups by the field), any operations on that grouped output, such as $sort, also happen in memory. This is where you can start to use up that 10% if you're not careful.
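As a hedged illustration, this is the shape of a pipeline whose $sort runs on a computed field and therefore sorts the in-memory grouped results; the collection and field names ("orders", "customerId", "amount") are hypothetical:

```javascript
// Pipeline sketch: $match shrinks the working set early, $group holds its
// accumulators in memory, and $sort on the computed "total" also runs in
// memory over the grouped output. In the mongo shell this would be run as
// db.orders.aggregate(pipeline); here we only build the pipeline object.
const pipeline = [
  { $match: { status: "shipped" } },
  { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
  { $sort: { total: -1 } },
];

console.log(pipeline.length); // 3
```

Putting a selective $match first keeps the grouped working set small, which is the practical way to stay under the 10% RAM ceiling described above.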
