MongoDB poor write performance on large collections with 50,000,000+ documents

盖世英雄少女心 2021-01-31 17:49

I have a MongoDB instance that stores product data for 204,639,403 items; the data has already been split up, by each item's country, into four logical databases.

2 Answers
  • 2021-01-31 18:35

    Would you consider using a database with better throughput that supports documents? I've heard success stories with TokuMX. And FoundationDB (where I'm an engineer) has very good performance with high-concurrent write loads and large documents. Happy to answer further questions about FoundationDB.

  • 2021-01-31 18:40

    Most likely you are running into issues due to record growth; see http://docs.mongodb.org/manual/core/write-performance/#document-growth.

    Mongo prefers records of fixed (or at least bounded) size. Growing a record beyond its pre-allocated storage causes the document to be moved to another location on disk, multiplying your I/O with each write. If your document sizes are relatively homogeneous, consider pre-allocating "enough" space for your average document at insert time. Otherwise, consider splitting rapidly growing nested arrays into a separate collection, thereby replacing updates with inserts (a rough sketch of both ideas follows below). Also check your fragmentation and consider compacting your databases from time to time, so that you have a higher density of documents per block, which will cut down on hard page faults.

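    A rough pymongo sketch of both suggestions, under stated assumptions: the database and collection names ("shop", "products", "price_history"), the field names, and the padding size are illustrative, not taken from the question, and the padding trick only matters on the MMAPv1 storage engine, where a document that outgrows its record is relocated on disk.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["shop"]          # database name is an assumption

        # 1) Pre-allocate room at insert time (MMAPv1): insert with a throw-away
        #    padding field, then $unset it so the record keeps the extra space and
        #    later growth stays in place instead of moving the document on disk.
        PADDING = "x" * 2048         # assumed average growth per document

        def insert_padded(doc):
            result = db.products.insert_one({**doc, "_padding": PADDING})
            db.products.update_one({"_id": result.inserted_id},
                                   {"$unset": {"_padding": ""}})

        # 2) Move a rapidly growing nested array into its own collection, so each
        #    new entry is an insert instead of a $push that grows a big document.
        def record_price(product_id, price, ts):
            db.price_history.insert_one(
                {"product_id": product_id, "price": price, "ts": ts})

        # Reading the history back is an ordinary indexed query:
        db.price_history.create_index([("product_id", 1), ("ts", 1)])
        history = list(db.price_history.find({"product_id": 42}).sort("ts", 1))

    The trade-off of the second idea is that reads of the history now need a separate, indexed query instead of coming along with the parent document.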