Find largest document size in MongoDB

耶瑟儿~ · 2020-12-01 01:33

Is it possible to find the largest document size in MongoDB?

db.collection.stats() shows the average size, which is not really representative because in my case document sizes can differ considerably.

7 Answers
  • 2020-12-01 01:43

    Note: this will attempt to store the whole result set in memory (because of .toArray). Be careful on big data sets, and do not use it in production! Abishek's answer has the advantage of working over a cursor instead of an in-memory array.

    If you also want the _id, try this. Given a collection called "requests" :

    // Builds a list of { size, _id } pairs, sorts it ascending by size, then pops the largest
    db.requests.find().toArray()
        .map(function(request) { return { size: Object.bsonsize(request), _id: request._id }; })
        .sort(function(a, b) { return a.size - b.size; })
        .pop();

    // { "size" : 3333, "_id" : "someUniqueIdHere" }
    
  • 2020-12-01 01:44

    Inspired by Elad Nana's package, but usable in a MongoDB console:

    function biggest(collection, limit = 100, sort_delta = 100) {
      var documents = [];
      var cursor = collection.find().readPref("nearest");
      while (cursor.hasNext()) {
        var doc = cursor.next();
        var size = Object.bsonsize(doc);
        // Keep a document if the list is not full yet, or if it is bigger than the smallest kept size
        if (documents.length < limit || size > documents[limit - 1].size) {
          documents.push({ id: doc._id.toString(), size: size });
        }
        // Every sort_delta extra documents (and once at the end), sort descending and trim to limit
        if (documents.length > (limit + sort_delta) || !cursor.hasNext()) {
          documents.sort(function (first, second) {
            return second.size - first.size;
          });
          documents = documents.slice(0, limit);
        }
      }
      return documents;
    }

    biggest(db.collection)
    
    • Uses a cursor instead of an in-memory array
    • Gives a list of the limit biggest documents, not just the single biggest
    • Sorts and trims the output list to limit every sort_delta documents
    • Uses nearest as the read preference (you might also want to use rs.slaveOk() on the connection to be able to list collections if you're on a slave node)
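    For example, to list the 50 largest documents in a hypothetical users collection (collection name and arguments are illustrative only):

    // Top 50 largest documents, re-sorting the candidate list every 200 extra documents
    biggest(db.users, 50, 200)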
  • 2020-12-01 01:56

    Well, this is an old question, but I thought I'd share my two cents on it.

    My approach: use Mongo's mapReduce function.

    First, let's get the size of each document:

    db.myCollection.mapReduce
    (
       function() { emit(this._id, Object.bsonsize(this)) }, // map each document to an id / size pair
       function(key, val) { return val }, // val = the document's size (a single value per document)
       { 
           query: {}, // query all documents
           out: { inline: 1 } // just return the result (don't create a new collection for it)
       } 
    )
    

    This will return the size of every document, although it is worth mentioning that saving the output as a collection is a better approach (with inline output, the result is an array of results inside the result field).
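    For instance, a minimal sketch of that approach (the output collection name docSizes is just an illustration): write the per-document sizes to a collection, then query it for the largest entries.

    db.myCollection.mapReduce(
        function() { emit(this._id, Object.bsonsize(this)) },
        function(key, val) { return val },
        { query: {}, out: "docSizes" } // persist the results to a docSizes collection
    )

    // mapReduce output documents have the shape { _id, value }, so the largest document is:
    db.docSizes.find().sort({ value: -1 }).limit(1)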

    Second, let's get the max document size by manipulating this query:

    db.metadata.mapReduce
    (
        function() { emit(0, Object.bsonsize(this)) }, // emit a fake id (0) for every document, with its size as the value
        function(key, vals) { return Math.max.apply(Math, vals) }, // use Math.max to get the max value from vals (each val = a document size)
        { query: {}, out: { inline: 1 } } // same as the first example
    )
    

    This will return a single result whose value equals the max document size.

    In short:

    you may want to use the first example and save its output as a collection (change the out option to the name of the collection you want), then apply further aggregations on it (max size, min size, etc.)

    -OR-

    you may want to use a single query (the second option) to get a single stat (min, max, avg, etc.)

  • 2020-12-01 01:57

    You can use a small shell script to get this value.

    Note: this will perform a full collection scan, which will be slow on large collections.

    // Scan every document, tracking the largest BSON size seen and its _id
    let max = 0, id = null;
    db.test.find().forEach(doc => {
        const size = Object.bsonsize(doc);
        if (size > max) {
            max = size;
            id = doc._id;
        }
    });
    print(id, max);
    
  • 2020-12-01 01:57

    Starting in Mongo 4.4, the new aggregation operator $bsonSize returns the size in bytes of a given document when encoded as BSON.

    Thus, to find the BSON size of the biggest document:

    // { "_id" : ObjectId("5e6abb2893c609b43d95a985"), "a" : 1, "b" : "hello" }
    // { "_id" : ObjectId("5e6abb2893c609b43d95a986"), "c" : 1000, "a" : "world" }
    // { "_id" : ObjectId("5e6abb2893c609b43d95a987"), "d" : 2 }
    db.collection.aggregate([
      { $group: {
        _id: null,
        max: { $max: { $bsonSize: "$$ROOT" } }
      }}
    ])
    // { "_id" : null, "max" : 46 }
    

    This:

    • $groups all documents together
    • takes the $max of the documents' $bsonSize
    • $$ROOT represents the current document for which we get the BSON size
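    If you also need the _id of the biggest document rather than just its size, a minimal sketch along the same lines (still assuming Mongo 4.4+ for $bsonSize):

    db.collection.aggregate([
      // Compute each document's BSON size, keeping its _id
      { $project: { size: { $bsonSize: "$$ROOT" } } },
      // Sort largest first and keep only the top document
      { $sort: { size: -1 } },
      { $limit: 1 }
    ])
    // { "_id" : ObjectId("..."), "size" : 46 }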
  • 2020-12-01 02:02

    Using the aggregation framework and a tiny bit of knowledge about the documents in the collection, finding the largest documents can be ~100x faster than the other answers. You'll also get the results in seconds, vs. minutes with the other approaches (forEach, or worse, fetching all documents to the client).

    You need to know which field(s) in your document might be the largest ones - which you almost always will know. There are only two practical¹ MongoDB types that can have variable sizes:

    • arrays
    • strings

    The aggregation framework can calculate the length of each. Note that for arrays you won't get the size in bytes, but the length in elements. However, what typically matters more is which documents are the outliers, not exactly how many bytes they take.

    Here's how it's done for arrays. As an example, let's say we have a collection of users in a social network and we suspect the array friends.ids might be very large (in practice you should probably keep a separate field like friendsCount in sync with the array, but for the sake of example, we'll assume that's not available):

    db.users.aggregate([
        { $match: {
            'friends.ids': { $exists: true }
        }},
        { $project: { 
            sizeLargestField: { $size: '$friends.ids' } 
        }},
        { $sort: {
            sizeLargestField: -1
        }},
    ])
    

    The key is to use the $size aggregation pipeline operator. It only works on arrays though, so what about text fields? We can use the $strLenBytes operator. Let's say we suspect the bio field might also be very large:

    db.users.aggregate([
        { $match: {
            bio: { $exists: true }
        }},
        { $project: { 
            sizeLargestField: { $strLenBytes: '$bio' } 
        }},
        { $sort: {
            sizeLargestField: -1
        }},
    ])
    

    You can also combine $size and $strLenBytes using $sum to calculate the size of multiple fields, as sketched below. In the vast majority of cases, 20% of the fields will take up 80% of the size (if not 10/90 or even 1/99), and large fields must be either strings or arrays.
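    A minimal sketch of that combination, using the same hypothetical friends.ids and bio fields as above ($ifNull supplies an empty default where a field is missing, since $size and $strLenBytes error out on non-array / non-string input):

    db.users.aggregate([
        { $project: {
            sizeLargestFields: { $sum: [
                { $size: { $ifNull: ['$friends.ids', []] } },
                { $strLenBytes: { $ifNull: ['$bio', ''] } }
            ]}
        }},
        { $sort: { sizeLargestFields: -1 } },
        { $limit: 10 }
    ])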


    ¹ Technically, the rarely used binData type can also have variable size.
