Bulk insert performance in MongoDB for large collections


Question


I'm using BulkWriteOperation (Java driver) to store data in large chunks. At first it seems to work fine, but as the collection grows in size, the inserts take considerably longer.
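For reference, the write path described above looks roughly like the following minimal sketch using the legacy Java driver; the host, database, collection, and field names are placeholders:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.BulkWriteOperation;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class BulkInsertSketch {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("mydb");                    // placeholder database
        DBCollection coll = db.getCollection("events");  // placeholder collection

        // An unordered bulk op lets the server apply inserts in any order,
        // which is typically faster than an ordered one for insert-only loads.
        BulkWriteOperation bulk = coll.initializeUnorderedBulkOperation();
        for (int i = 0; i < 1000; i++) {
            bulk.insert(new BasicDBObject("time", System.currentTimeMillis())
                    .append("value", i));                // placeholder fields
        }
        bulk.execute();
        client.close();
    }
}
```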

Currently, for a collection of 20M documents, a bulk insert of 1,000 documents can take about 10 seconds.

Is there a way to make insert time independent of collection size? I don't have any updates or upserts; it's always new data I'm inserting.

Judging from the log, there doesn't seem to be any issue with locks. Each document has an indexed time field, but its values grow monotonically, so I don't see any need for MongoDB to spend time reorganizing the index.

I'd love to hear some ideas for improving the performance.

Thanks


Answer 1:


You believe that the index does not require any document reorganisation, and the way you described it (a monotonically increasing time field) suggests a right-handed index, which should be fine. So indexing seems to be ruled out as an issue. You could of course, as suggested above, rule this out definitively by dropping the index and re-running your bulk writes.

Aside from indexing, I would …

  • Consider whether your disk can keep up with the volume of data you are persisting; there are more details on this in the MongoDB docs.
  • Use profiling to understand what's happening with your writes (see the sketch below).
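A minimal sketch of both suggestions using the legacy Java driver; the index name, database name, and 100 ms threshold are assumptions:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.MongoClient;

public class DiagnoseWrites {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("mydb"); // placeholder database

        // Rule the index out: drop it, re-run the bulk writes, compare timings.
        // "time_1" is the default name of an ascending index on "time".
        db.getCollection("events").dropIndex("time_1");

        // Profiling level 1 records every operation slower than 100 ms
        // into the database's system.profile collection.
        db.command(new BasicDBObject("profile", 1).append("slowms", 100));

        client.close();
    }
}
```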



Answer 2:


  1. Do you have any index on the collection? If so, every insert also has to update the index tree, which takes time.
  2. Is the data time-series? If so, prefer in-place updates over inserts. This blog post suggests that pre-allocating documents and updating them in place is more efficient than inserting: https://www.mongodb.com/blog/post/schema-design-for-time-series-data-in-mongodb (a sketch of the pattern follows this list).
  3. Are you able to set up sharded collections? If so, that would reduce insert time (tested with 3 sharded servers and 15 million IP geo-entry records).
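As a rough sketch of the pre-allocation pattern from that blog post (legacy Java driver; the bucket _id scheme and field names are made up for illustration):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;

public class TimeSeriesBuckets {
    // One pre-allocated document per sensor per hour, with a slot per minute.
    // A new reading becomes an in-place $set on an existing document rather
    // than an insert, so documents never grow or move on disk.
    static void recordReading(DBCollection coll, String sensorId,
                              String hour, int minute, double reading) {
        BasicDBObject query = new BasicDBObject("_id", sensorId + ":" + hour);
        BasicDBObject update = new BasicDBObject("$set",
                new BasicDBObject("values." + minute, reading));
        coll.update(query, update);
    }
}
```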



Answer 3:


  • Disk utilization & CPU: Check disk utilization and CPU to see whether either is maxing out. Most likely it is the disk that is causing this issue.

  • Mongo log: If a 1,000-document bulk write takes 10 seconds, check the mongod log to see whether a few individual inserts within the batch account for most of that time. If there are such operations, you can narrow down your analysis (see the sketch below).
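Assuming profiling is enabled (see Answer 1), one way to do that narrowing from Java is to scan the system.profile collection for slow inserts; this is a sketch, and the 100 ms threshold is an assumption:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCursor;
import com.mongodb.MongoClient;

public class FindSlowInserts {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("mydb"); // placeholder database

        // List insert operations that the profiler recorded as taking
        // longer than 100 ms.
        DBCursor slow = db.getCollection("system.profile")
                .find(new BasicDBObject("op", "insert")
                        .append("millis", new BasicDBObject("$gt", 100)));
        while (slow.hasNext()) {
            System.out.println(slow.next());
        }
        client.close();
    }
}
```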

Another thing that's not clear is the mix of operations hitting your Mongo instance. Are inserts the only operation, or are there find queries running too? If so, you should look at scaling up whichever resource is maxing out.



Source: https://stackoverflow.com/questions/30736231/bulk-insert-performance-in-mongodb-for-large-collections
