Question
We use ArangoDB 3.3.14 (Community Edition) with the MMFiles storage engine for a relatively large data set (a bit over 30 GB when you back it up). We run it inside a Docker container on ECS. Our host VM has 64 GB of RAM, and we have dedicated 55 GB exclusively to the ArangoDB container (we set a hard limit of 55 GB for that container).
When ArangoDB has just started and has all the collections loaded into RAM, it takes about 45 GB, so we have about 10 GB of free RAM for queries, etc.
The problem is that after some period of time (depending on usage) ArangoDB eats all 55 GB of RAM and does not stop there. It continues to consume RAM beyond the hard limit, and at some point Docker kills the container with exit code 137 and the status reason OutOfMemoryError: Container killed due to memory usage.
The restart causes a lot of problems for us because we need to wait until all the collections and graphs are loaded back into RAM again. That takes about 1-1.5 hours for our data set, and you cannot use ArangoDB while it is "restarting".
My question is: how can I limit ArangoDB's RAM usage, say to 54 GB, so it never reaches the hard memory limit set for the Docker container?
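For context, a minimal sketch of the kind of hard limit described above, assuming a plain docker run rather than an ECS task definition (on ECS the equivalent is the hard memory limit in the container definition); the image tag and password are illustrative:

# Hard-limit the container to 55 GB of RAM; when the cgroup limit is
# exceeded, the process is OOM-killed and Docker reports exit code 137.
docker run -d --name arangodb \
  --memory=55g \
  -e ARANGO_ROOT_PASSWORD=changeme \
  arangodb/arangodb:3.3.14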
Answer 1:
ArangoDB 3.3.20 introduced the parameter --rocksdb.total-write-buffer-size, which limits the write buffer. You can try adding this to your configuration file:
[rocksdb]
block-cache-size = <value in bytes> # 30% RAM
total-write-buffer-size = <value in bytes> # 30% RAM
enforce-block-cache-size-limit = true
[cache]
size = <value in bytes> # 20% RAM
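For example, with the 55 GB container above, those percentages work out roughly as follows (illustrative values only; 55 GiB = 59,055,800,320 bytes):

[rocksdb]
block-cache-size = 17716740096          # ~30% of 55 GiB
total-write-buffer-size = 17716740096   # ~30% of 55 GiB
enforce-block-cache-size-limit = true

[cache]
size = 11811160064                      # ~20% of 55 GiB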
Or you can pass the parameters on the command line:
arangod --cache.size <value in bytes, ~20% RAM> \
        --rocksdb.block-cache-size <value in bytes, ~30% RAM> \
        --rocksdb.total-write-buffer-size <value in bytes, ~30% RAM> \
        --rocksdb.enforce-block-cache-size-limit true
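Inside Docker, the same flags can be appended to the container command; a sketch assuming the official arangodb/arangodb image (which passes extra arguments through to arangod), using the example values computed above for a 55 GB container:

# Start ArangoDB 3.3.20 with explicit memory limits for cache,
# RocksDB block cache, and RocksDB write buffers.
docker run -d --memory=55g \
  arangodb/arangodb:3.3.20 \
  arangod --cache.size 11811160064 \
          --rocksdb.block-cache-size 17716740096 \
          --rocksdb.total-write-buffer-size 17716740096 \
          --rocksdb.enforce-block-cache-size-limit true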
You can also tune how much memory to assign to each component based on your usage, but you will have to upgrade to at least 3.3.20.
Answer 2:
Yes, right, those specific parameters are for RocksDB (apart from --cache.size). In your case it is probably better to move to RocksDB, which has several advantages:
- document-level locks
- support for large data-sets
- persistent indexes
And you can limit its memory consumption as well (starting from 3.3.20, on Linux). With MMFiles, both collections and indexes have to fit in memory.
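The engine cannot be switched in place: it is selected at server start with --server.storage-engine, and the data has to be moved via dump and restore. A rough sketch, with the database name and paths as placeholders (exact flags depend on your setup):

# 1. Dump the data from the running MMFiles instance.
arangodump --server.endpoint tcp://127.0.0.1:8529 --server.database mydb --output-directory /backup

# 2. Start a fresh instance with the RocksDB engine.
arangod --server.storage-engine rocksdb

# 3. Restore the dump into the RocksDB instance.
arangorestore --server.endpoint tcp://127.0.0.1:8529 --server.database mydb --create-database true --input-directory /backup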
Source: https://stackoverflow.com/questions/54416790/how-to-limit-arangodb-ram-usage-inside-of-a-docker-container