MongoDB: fetching documents slow (index used)

一向 2021-01-13 04:25

The FETCH stage is the limiting factor in my queries. I've been researching this, and it seems that MongoDB is reading much more than it needs to and is not fully utilizing the available bandwidth.

2 Answers
  • 2021-01-13 05:16

    The main facts are:

    • The machine has 16 GB of RAM
    • The collection in question is 112 GB uncompressed (~51 GB compressed)
    • The collection's index total size is ~7 GB
    • The collection contains 367,614,513 documents
    • The bulk of the time is spent fetching documents for the projection: 166,470 ms (~166 seconds), while the index scan for the query takes only 980 ms (<1 second); the explain() sketch below shows where these per-stage timings come from.
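
    For reference, a minimal sketch of how these per-stage timings can be read from an explain plan in the shell; collection, field, and variable names here are placeholders, and the exact stage layout varies by MongoDB version:

    // Run the query with execution statistics enabled.
    const plan = db.myCollection
        .find({ someField: { $gte: someValue } }, { someField: 1, otherField: 1 })
        .explain("executionStats");

    // Walk the winning plan (typically PROJECTION -> FETCH -> IXSCAN) and print
    // the estimated time spent in each stage.
    let stage = plan.executionStats.executionStages;
    while (stage) {
        print(stage.stage, stage.executionTimeMillisEstimate);
        stage = stage.inputStage;   // some stages use inputStages (an array) instead
    }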

    Assuming the WiredTiger cache is left at its default size, the amount of RAM reserved for it should be approximately 8.6 GB. From https://docs.mongodb.com/v3.2/faq/storage/#to-what-size-should-i-set-the-wiredtiger-internal-cache:

    Starting in MongoDB 3.2, the WiredTiger internal cache, by default, will use the larger of either:

    • 60% of RAM minus 1 GB, or
    • 1 GB.
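
    For the 16 GB machine above, the rule works out as follows (just the arithmetic, written as shell JavaScript):

    // The larger of (60% of RAM minus 1 GB) and 1 GB, per the quoted rule.
    const ramGB = 16;
    const cacheGB = Math.max(0.6 * ramGB - 1, 1);
    print(cacheGB);   // 8.6, i.e. roughly 8.6 GB as estimated above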

    From the information above, it appears that there is memory pressure on your machine. MongoDB tries to keep indexes in memory for fast access, and the indexes alone total ~7 GB, which fills roughly 80% of your WiredTiger cache and leaves little room for anything else. As a result, MongoDB is forced to pull the documents in the result set from disk, and that is where performance suffers.
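
    If you want to confirm this, the WiredTiger section of serverStatus() reports how full the cache actually is. A sketch for the shell follows; the metric names may differ slightly between versions:

    // Compare current cache usage against the configured maximum.
    const cache = db.serverStatus().wiredTiger.cache;
    const usedGB = cache["bytes currently in the cache"] / 1024 ** 3;
    const maxGB = cache["maximum bytes configured"] / 1024 ** 3;
    print(`WiredTiger cache: ${usedGB.toFixed(1)} GB used of ${maxGB.toFixed(1)} GB`);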

    You can see the effect of this in the iostat output, where the device xvdf (where the data resides) is hitting more than 94% utilization (the %util column). This means your operation is I/O bound: there is not enough RAM to hold your ideal working set.

    To mitigate this issue, you could try to:

    • Provision more RAM for your deployment
    • If applicable, use a cursor to iterate over the documents instead of pulling the whole result set into memory at once (a sketch follows this list)
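
    A minimal sketch of the cursor approach in the shell; collection, field, and helper names are placeholders, and most drivers expose the same cursor interface:

    // Iterate the cursor in batches instead of materializing the whole result set.
    db.myCollection
        .find({ someField: { $gte: someValue } }, { someField: 1, otherField: 1 })
        .batchSize(1000)            // pull documents from the server in chunks
        .forEach(doc => {
            processDocument(doc);   // placeholder for your own per-document logic
        });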

    You could also review the Production Notes and the Operations Checklist for recommended settings.

  • 2021-01-13 05:20

    I ran into the same problem when fetching around 35,000 documents. To solve it, I used the aggregate function (from the sakulstra:aggregate package), and in my case it sped up the request enormously. The result format is obviously not the same (its shape is sketched after the code below), but it is still easy to work with for everything I need to compute.

    Before (7000 ms):

    // find() + fetch() materializes every matching document in memory before returning.
    const historicalAssetAttributes = HistoricalAssetAttributes.find({
            date: {'$gte': startDate, '$lte': endDate},
            assetId: {$in: assetIds}
        }, {
            fields: {
                "date": 1,
                "assetId": 1,
                "close": 1
            }
        }).fetch();
    

    After (300 ms):

    const historicalAssetAttributes = HistoricalAssetAttributes.aggregate([
            {
                '$match': {
                    date: {'$gte': startDate, '$lte': endDate},
                    assetId: {$in: assetIds}
                }
            }, {
                '$group':{
                    _id: {assetId: "$assetId"},
                    close: {
                        '$push': {
                            date: "$date",
                            value: "$close"
                        }
                    }
                }
            }
        ]);
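
    With that pipeline, each returned document is one group per assetId, roughly of this shape (illustrative placeholders only):

    // {
    //     _id: { assetId: <assetId> },
    //     close: [
    //         { date: <date>, value: <close> },
    //         ...
    //     ]
    // }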
    