MongoDB preload documents into RAM for better performance

Submitted by 无人久伴 on 2019-12-30 01:29:08

Question


I want MongoDB to hold query results in RAM for a longer period of time (say 30 minutes, if memory is available). Is that possible? Or is there any way I can make sure the data is pre-loaded into RAM before subsequent queries run against it?

In fact, I am wondering about the performance of simple queries in MongoDB. I have a dedicated server with 10 GB of RAM, and my db.stats() output is as follows:

db.stats();
{
    "db": "test",
    "collections":16,
    "objects":625690,
    "avgObjSize":68.90,
    "dataSize":43061996,
    "storageSize":1121402888,
    "numExtents":74,
    "indexes":25,
    "indexSize":28207200,
    "fileSize":469762048,
    "nsSizeMB":16,
    "ok":1
}

Now when I query a single document (as mentioned here) from a web service, it loads in 1.3 seconds. Subsequent calls of the same query respond in 400 ms, and then after a few seconds it goes back to taking 1.3 seconds. It looks like MongoDB has dropped the previously queried document from memory, even though no other queries are asking for data mapped into RAM.

Please explain this, and let me know if there is any way to make subsequent queries respond faster.


Answer 1:


Your observed performance problem on an initial query is likely one of the following issues (in rough order of likelihood):

1) Your application / web service has some overhead to initialize on first request (e.g. allocating memory, setting up connection pools, resolving DNS, ...).

2) Indexes or data you have requested are not yet in memory, so need to be loaded.

3) The Query Optimizer may take a bit longer to run on the first request, as it is comparing the plan execution for your query pattern.

It would be very helpful to test the query via the mongo shell, and isolate whether the overhead is related to MongoDB or your web service (rather than timing both, as you have done).
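
For example, a minimal mongo shell sketch of that test (the collection name "mycollection" and the filter { name: "example" } are placeholders, not your actual query) could be:

use test
// Time the same findOne your web service issues, so the measurement
// excludes web-service startup, connection pooling, and serialization.
var start = new Date();
var doc = db.mycollection.findOne({ name: "example" });
print("findOne took " + (new Date() - start) + " ms");

Running it two or three times in a row will show whether the first-call penalty comes from MongoDB itself or from the web service in front of it.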

Following are some notes related to MongoDB.

Caching

MongoDB doesn't have a "caching" time for documents in memory. It uses memory-mapped files for disk I/O and the documents in memory are based on your active queries (documents/indexes you've recently loaded) as well as the available memory. The operating system's virtual memory manager is in charge of caching, and typically will follow a Least-Recently Used (LRU) algorithm to decide which pages to swap out of memory.

Memory Usage

The expected behaviour is that over time MongoDB will grow to use all free memory to store your active working data set.

Looking at your provided db.stats() numbers (and assuming that is your only database), it looks like your database size is currently about 1 GB, so you should be able to keep everything within your 10 GB of total RAM (see the scaled db.stats() sketch after the list below) unless:

  • there are other processes competing for memory
  • you have restarted your mongod server and those documents/indexes haven't been requested yet
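
As a quick check, db.stats() accepts a scale argument, so you can re-run it with the values reported in megabytes; the approximate figures in the comments are just your own numbers converted:

// Report database sizes scaled to megabytes
db.stats(1024 * 1024)
// dataSize    ~41 MB    -- your documents
// indexSize   ~27 MB    -- your indexes
// storageSize ~1070 MB  -- space allocated on disk (includes preallocated/unused space)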

In MongoDB 2.2, there is a new touch command you can use to load indexes or documents into memory after a server restart. This should only be used on initial startup to "warm up" the server, as otherwise you could be unhelpfully forcing actual "active" data out of memory.
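
For example, assuming a collection named "mycollection" (a placeholder for one of your own collections), a warm-up after a restart could look like:

// MongoDB 2.2+: pull a collection's documents and/or indexes into memory
db.runCommand({ touch: "mycollection", data: true, index: true })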

On a Linux system, for example, you can use the top command and should see that:

  • virtual bytes/VSIZE will tend to be the size of the entire database
  • if the server doesn't have other processes running, resident bytes/RSIZE will be the total memory of the machine (this includes file system cache contents)
  • mongod should not use swap (since the files are memory-mapped)
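
If you prefer to check from the mongo shell rather than top, db.serverStatus() reports comparable figures (values in megabytes); a small sketch:

// Memory section of serverStatus (values are reported in MB)
var mem = db.serverStatus().mem;
print("resident: " + mem.resident + " MB");
print("virtual:  " + mem.virtual + " MB");
print("mapped:   " + mem.mapped + " MB");  // size of the memory-mapped data files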

You can use the mongostat tool to get a quick view of your mongod activity .. or more usefully, use a service like MMS to monitor metrics over time.

Query Optimizer

The MongoDB Query Optimizer compares plan execution for a query pattern every ~1,000 write operations, and then caches the "winning" query plan until the next time the optimizer runs .. or you explicitly call an explain() on that query.

This should be a straightforward one to test: run your query in the mongo shell with .explain() and look at the ms timings, and also the number of index entries and documents scanned. The timing for an explain() isn't the actual time the queries will take to run, as it includes the cost of comparing the plans. The typical execution will be much faster .. and you can look for slow queries in your mongod log.
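
For example (collection and filter are placeholders, not your actual query):

// Mongo shell, 2.x explain output
db.mycollection.find({ name: "example" }).explain()
// Fields worth checking in the result:
//   cursor   -- "BtreeCursor ..." means an index was used; "BasicCursor" means a full scan
//   nscanned -- index entries examined
//   n        -- documents returned
//   millis   -- time taken (includes the plan-comparison overhead mentioned above)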

By default MongoDB will log all queries slower than 100ms, so this provides a good starting point to look for queries to optimize. You can adjust the slow ms value with the --slowms config option, or using the Database Profiler commands.
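
As a sketch, the profiler can be enabled from the shell; the 200 ms threshold here is only an illustrative value:

// Level 1 = record only operations slower than the given threshold (in ms)
db.setProfilingLevel(1, 200)
// Slow operations are then written to the system.profile collection
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()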

Further reading in the MongoDB documentation:

  • Caching
  • Checking Server Memory Usage
  • Database Profiler
  • Explain
  • Monitoring & Diagnostics


Source: https://stackoverflow.com/questions/13374342/mongodb-preload-documents-into-ram-for-better-performance
