MongoDB very slow deletes

猫巷女王i 2021-01-03 01:17

I've got a small replica set of three mongod servers (16GB RAM each, at least 4 CPU cores and real HDDs) and one dedicated arbiter. The replicated data has about 100,000,0…

2 Answers
  • 2021-01-03 01:33

    This is happening because even though

    db.repo.remove({"date" : {"$lt" : new Date(1362096000000)}})
    

    looks like a single command, it actually operates on many documents: as many as satisfy the query.

    When you use replication, every write operation has to be recorded in a special capped collection in the local database called oplog.rs (the oplog for short).

    A multi-document delete is not replicated as one operation: the oplog gets a separate entry for each deleted document, and every one of those entries has to be shipped to and applied on each secondary before it can delete the same record.
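
    You can see this per-document fan-out in the oplog itself. A quick illustration from the mongo shell, where mydb is a stand-in for your actual database name:

        // The oplog lives in the "local" database on every replica-set member.
        // "op": "d" marks delete entries; "ns" is the namespace the delete hit.
        // A remove() matching N documents produces N such entries, and each
        // one must be replicated to and replayed on every secondary.
        db.getSiblingDB("local").oplog.rs.find(
            { op: "d", ns: "mydb.repo" }
        ).limit(5).pretty()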

    One thing I can suggest you consider is a TTL index: it "automatically" deletes documents once they pass the expiration date/age you set, so instead of one massive delete the load is spread out over time.
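
    A minimal sketch of what that could look like here, assuming the date field holds the document's creation time (the 30-day window is an arbitrary value, not something from the question):

        // Expire documents 30 days (2592000 s) after the value in "date".
        // A background thread removes expired documents in small batches,
        // so deletes never arrive as one huge burst.
        db.repo.createIndex({ date: 1 }, { expireAfterSeconds: 2592000 })

    Note that the TTL monitor only runs about once a minute and deletes in batches, so expiry is approximate rather than instantaneous.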

  • 2021-01-03 01:38

    Another suggestion that may not fit your case, but it was the optimal solution for me:

    1. Drop the indexes from the collection.
    2. Iterate over all entries of the collection and collect the _ids of the records to delete into an in-memory array.
    3. Whenever the array gets big enough (for me it was 10K records), remove those records by _id (see the sketch below).
    4. Rebuild the indexes.

    This is the fastest way, but it requires stopping the system, which was acceptable in my case.
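
    A rough mongo shell sketch of steps 1-4, reusing the collection and date cutoff from the question; the index key and batch size are placeholders, not something prescribed by the answer:

        var cutoff = new Date(1362096000000);
        var BATCH = 10000;

        // 1. Drop secondary indexes so each delete skips their maintenance
        //    cost (the mandatory _id index still remains).
        db.repo.dropIndex({ date: 1 });  // repeat for other secondary indexes

        // 2./3. Collect _ids and delete whenever the batch fills up.
        var ids = [];
        db.repo.find({ date: { $lt: cutoff } }, { _id: 1 }).forEach(function (doc) {
            ids.push(doc._id);
            if (ids.length >= BATCH) {
                db.repo.remove({ _id: { $in: ids } });
                ids = [];
            }
        });
        if (ids.length > 0) {
            db.repo.remove({ _id: { $in: ids } });  // flush the last partial batch
        }

        // 4. Rebuild the indexes once the bulk delete is done.
        db.repo.createIndex({ date: 1 });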
