I'm running on MongoDB 3.6.6 (on a small MongoDB Atlas cluster, not sharded) using the native Node.js driver (v3.0.10).
My code looks like this:
const
First of all, if your result set is large it's not a good idea to use the toArray() method, because it loads everything into memory at once; it's better to use forEach() and loop through the documents as they are streamed from the cursor, like this:
const { ObjectId } = require('mongodb'); // needed for ObjectId(userId)

// find() returns a cursor immediately, so there is nothing to await here
const records = collection.find({
  userId: ObjectId(userId),
  status: 'completed',
  lastUpdated: {
    $exists: true,
    $gte: '2018-06-10T21:24:12.000Z'
  }
});

// cursor.forEach() fetches and processes the documents batch by batch
await records.forEach((record) => {
  // do something ...
});
Second, you can use the {allowDiskUse: true} option for large result sets. Note, though, that on MongoDB 3.6 this option is only honoured by aggregate() (find() only gained it in MongoDB 4.4), so on your server version the aggregation form sketched after the snippet below is the one that will actually take effect.
const records = collection.find({
  userId: ObjectId(userId),
  status: 'completed',
  lastUpdated: {
    $exists: true,
    $gte: '2018-06-10T21:24:12.000Z'
  }
},
{ allowDiskUse: true });
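On 3.6 the option only takes effect through the aggregation pipeline; a minimal sketch, assuming the same collection and the same filter as above (the $match stage simply mirrors the find() query):

const cursor = collection.aggregate(
  [
    {
      $match: {
        userId: ObjectId(userId),
        status: 'completed',
        lastUpdated: { $exists: true, $gte: '2018-06-10T21:24:12.000Z' }
      }
    }
  ],
  { allowDiskUse: true } // lets the server spill pipeline stages to disk
);

await cursor.forEach((record) => {
  // do something ...
});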
You can try these 3 things:
a) Disable the cursor timeout with noCursorTimeout:
db.collection.find().noCursorTimeout();
You must close the cursor at some point with cursor.close();
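Since the question uses the Node.js driver rather than the shell, the same flag can be set on the driver's cursor; a minimal sketch, assuming addCursorFlag() on the 3.x driver cursor and a hypothetical `query` object standing in for your filter:

const cursor = collection.find(query).addCursorFlag('noCursorTimeout', true);
try {
  await cursor.forEach((doc) => {
    // process doc ...
  });
} finally {
  // A no-timeout cursor is never cleaned up by the server, so always close it
  await cursor.close();
}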
b) Or reduce the batch size, so the driver goes back to the server more often and the cursor never sits idle long enough to hit the timeout:
db.inventory.find().batchSize(10);
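With the Node.js driver the batch size can be set on the cursor or passed as a find() option; a short sketch, again assuming a hypothetical `query` object for your filter:

// Either chain it on the cursor ...
const cursor = collection.find(query).batchSize(10);

// ... or pass it as an option
const cursor2 = collection.find(query, { batchSize: 10 });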
c) Retry when the cursor expires:
let processed = 0;
let updated = 0;

while (true) {
    const cursor = db.snapshots.find().sort({ _id: 1 }).skip(processed);

    try {
        while (cursor.hasNext()) {
            const doc = cursor.next();

            ++processed;

            if (doc.stream && doc.roundedDate && !doc.sid) {
                db.snapshots.update({
                    _id: doc._id
                }, { $set: {
                    sid: `${ doc.stream.valueOf() }-${ doc.roundedDate }`
                }});

                ++updated;
            }
        }

        break; // Done processing all, exit outer loop
    } catch (err) {
        if (err.code !== 43) {
            // Something other than an expired cursor (CursorNotFound, code 43)
            // went wrong. Abort the loop.
            throw err;
        }
    }
}
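The same resume-on-expiry pattern can be written against the Node.js driver the question uses; a rough sketch, assuming the same hypothetical `snapshots` collection and `sid` back-fill as in the shell version above:

async function backfillSids(db) {
  let processed = 0;
  let updated = 0;

  while (true) {
    // Re-create the cursor on each attempt, skipping documents already handled
    const cursor = db.collection('snapshots').find().sort({ _id: 1 }).skip(processed);

    try {
      while (await cursor.hasNext()) {
        const doc = await cursor.next();
        ++processed;

        if (doc.stream && doc.roundedDate && !doc.sid) {
          await db.collection('snapshots').updateOne(
            { _id: doc._id },
            { $set: { sid: `${doc.stream.valueOf()}-${doc.roundedDate}` } }
          );
          ++updated;
        }
      }

      break; // All documents processed, leave the outer loop
    } catch (err) {
      if (err.code !== 43) {
        // Not a CursorNotFound error, so re-throw
        throw err;
      }
      // Cursor expired: the outer loop re-creates it and resumes at `processed`
    }
  }

  return { processed, updated };
}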