If you are operating on a dataset that is fault tolerant, or doing a one-time process that you can verify afterwards, changing WriteAcknowledge (the write concern) to unacknowledged can help with speed.
"There is not a substantial read rate on the database so Sharding would not improve matters, although perhaps I am wrong."
An update involves a read, i.e. finding that forsaken _id, so sharding might be helpful, if not very helpful (see the sketch below).
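As a sketch of that idea (this is not from the original answer; the connection string, database and collection names are placeholders, and it assumes a sharded cluster with sharding already enabled on the database), the collection could be sharded on a hashed _id so that each _id lookup is routed to a single shard:

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");    // placeholder connection string
var admin = client.GetDatabase("admin");
admin.RunCommand<BsonDocument>(new BsonDocument
{
    { "shardCollection", "mydb.myCollection" },               // placeholder namespace
    { "key", new BsonDocument("_id", "hashed") }              // hashed shard key on _id
});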
You can try modifying the write concern level. Obviously there is a risk in this, as you would not be able to catch any write errors, but you should at least still be able to capture network errors. Since MongoDB groups the bulk insert operations in batches of 1,000, this should speed up the process.
W by default is 1:
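For illustration (a sketch; the database handle and collection name are placeholders, not from the original answer), the default is equivalent to:

var collection = database
    .GetCollection<BsonDocument>("myCollection")
    .WithWriteConcern(WriteConcern.W1);   // acknowledged: wait for the primary to confirm each write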
When you change it to 0:
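Again a sketch with the same placeholder names:

var unackedCollection = database
    .GetCollection<BsonDocument>("myCollection")
    .WithWriteConcern(WriteConcern.Unacknowledged);   // w: 0, same as new WriteConcern(0); write errors are not reported back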
If you are not concerned about the order of the elements, you can gain some speed by calling the unordered bulk operation:
await m_Collection.BulkWriteAsync(updates, new BulkWriteOptions() { IsOrdered = false });
With an unordered operations list, MongoDB can execute the write operations in parallel and in any order.
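For completeness, here is a hedged sketch of how the updates list passed to BulkWriteAsync above might be built. The document type, connection string and variable names are placeholders, not from the original question; the point is only that each operation matches on _id, which is the "read" part of the update.

using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");    // placeholder connection string
var m_Collection = client.GetDatabase("mydb").GetCollection<MyDocument>("myCollection");
var changedDocuments = new List<MyDocument>();                // the modified documents to write back

var updates = new List<WriteModel<MyDocument>>();
foreach (var doc in changedDocuments)
{
    // Match on _id and replace the whole document.
    var filter = Builders<MyDocument>.Filter.Eq(d => d.Id, doc.Id);
    updates.Add(new ReplaceOneModel<MyDocument>(filter, doc));
}

await m_Collection.BulkWriteAsync(updates, new BulkWriteOptions { IsOrdered = false });

public class MyDocument
{
    public ObjectId Id { get; set; }       // maps to _id by convention
    public string SomeField { get; set; }  // placeholder payload field
}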
The marked answer here is good. I want to add some additional code to help others who use InsertMany instead of BulkWriteAsync take advantage of IsOrdered = false and get their inserts done more quickly:
m_Collection.InsertMany(listOfDocument, new InsertManyOptions() { IsOrdered = false });
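If you are on the async API, the same option applies to InsertManyAsync (a sketch; listOfDocument is assumed to be the caller's list of documents of the collection's type):

await m_Collection.InsertManyAsync(listOfDocument, new InsertManyOptions { IsOrdered = false });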
We switched to Cassandra because Mongo doesn't scale well. If you saw performance degradation after 80M documents, it is most likely related to memory. I'm more of an expert on SQL databases, but I wouldn't say that 25 ms for an update on a non-key field is impressive; I suspect a similar update would perform better on Oracle, MySQL, and so on.