How to improve MongoDB insert performance

粉色の甜心 · 2020-12-24 13:00

The result:

If you are operating on a dataset that is fault tolerant, or doing a one-time process that you can verify afterwards, changing WriteAcknowledge (the write concern) can make the inserts significantly faster.
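A minimal sketch of applying this at the client level, assuming a hypothetical localhost connection string (w=0 in the URI requests unacknowledged writes for everything done through this client):

    using MongoDB.Driver;

    // w=0 applies an unacknowledged write concern to all writes made through this client.
    var client = new MongoClient("mongodb://localhost:27017/?w=0");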

4 Answers
  • 2020-12-24 13:20

    "There is not a substantial read rate on the database so Sharding would not improve matters, although perhaps I am wrong."

    An update involves a read: the document has to be located by its _id first, so sharding might still help, even if only modestly.
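    If you do try sharding, a rough sketch of sharding on a hashed _id from the C# driver (hypothetical database and collection names, an existing MongoClient named client, and a sharded cluster with sharding enabled for the database):

        using MongoDB.Bson;
        using MongoDB.Driver;

        // Shard "mydb.mycollection" on a hashed _id so lookups by _id
        // are routed to a single shard.
        var admin = client.GetDatabase("admin");
        var cmd = new BsonDocument
        {
            { "shardCollection", "mydb.mycollection" },
            { "key", new BsonDocument("_id", "hashed") }
        };
        admin.RunCommand<BsonDocument>(cmd);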

  • 2020-12-24 13:32

    You can try lowering the write concern level. Obviously there is a risk in doing this, as you won't be able to catch write errors, but you should still be able to catch network errors. Since MongoDB groups bulk insert operations into batches of 1,000, this should speed up the process.

    W is 1 by default (the server acknowledges each write); changing it to 0 makes the writes unacknowledged. A sketch of configuring this with the C# driver is shown below.
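    A minimal sketch, assuming the same m_Collection used below; WithWriteConcern returns a copy of the collection configured with the given write concern:

        using MongoDB.Driver;

        // Default behaviour: w:1, the server acknowledges each write.
        var acknowledged = m_Collection.WithWriteConcern(WriteConcern.W1);

        // Faster but riskier: w:0, the driver does not wait for acknowledgement,
        // so write errors are not reported back (network errors still surface).
        var unacknowledged = m_Collection.WithWriteConcern(WriteConcern.Unacknowledged);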

    If you are not concerned about the order of the elements, you can gain some speed by issuing an unordered bulk operation:

    await m_Collection.BulkWriteAsync(updates, new BulkWriteOptions() { IsOrdered = false });
    

    With an unordered operations list, MongoDB can execute the write operations in parallel and in any order (see the MongoDB documentation on unordered bulk writes).
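    For completeness, a minimal sketch of how the updates list above could be built, assuming a hypothetical MyDocument class with an Id property and an IEnumerable<MyDocument> named documents; each ReplaceOneModel matches on _id and upserts the document:

        using System.Collections.Generic;
        using MongoDB.Driver;

        var updates = new List<WriteModel<MyDocument>>();
        foreach (var doc in documents)
        {
            // Match the existing document by _id and replace it, inserting if missing.
            var filter = Builders<MyDocument>.Filter.Eq(d => d.Id, doc.Id);
            updates.Add(new ReplaceOneModel<MyDocument>(filter, doc) { IsUpsert = true });
        }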

  • 2020-12-24 13:32

    The marked answer here is good. I want to add some code to help others who use InsertMany instead of BulkWriteAsync get the same benefit from IsOrdered = false:

        m_Collection.InsertMany(listOfDocument, new InsertManyOptions() { IsOrdered = false });
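    The async counterpart, InsertManyAsync, accepts the same options:

        await m_Collection.InsertManyAsync(listOfDocument, new InsertManyOptions() { IsOrdered = false });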
    
  • 2020-12-24 13:43

    We switched to Cassandra because Mongo doesn't scale well for our workload. If you saw performance degrade after 80M documents, it is quite likely memory related. I'm more of an expert on SQL databases, but I wouldn't say that 25 ms for an update on a non-key field is impressive; I suspect a similar update would perform better on Oracle, MySQL, ...
