Unicorn Eating Memory

Backend · unresolved · 5 answers · 1715 views
Asked by 执念已碎 on 2021-02-08 00:11

I have an m1.small instance on Amazon with 8 GB of hard disk space, on which my Rails application runs. It runs smoothly for two weeks, and after that it crashes saying the memory is full.

5 Answers
  • 2021-02-08 00:13

    As Preston mentioned, you don't have a memory problem (over 40% is free); you have a disk-full problem. du reports that most of the storage is consumed in /root/data.

    You could use find to identify very large files; e.g., the following shows all files under that directory larger than 100 MB:

    sudo find /root/data -size +100M
    

    If unicorn is still running, lsof (LiSt Open Files) can show which files are in use by your running programs, or by a specific set of processes (-p PID), e.g.:

    sudo lsof | awk  '$5 ~/REG/ && $7 > 100000000 { print }'
    

    will show you open files greater than 100 MB in size.

  • 2021-02-08 00:17

    I've just released the unicorn-worker-killer gem. It enables you to kill a Unicorn worker based on 1) a maximum number of requests and 2) process memory size (RSS), without affecting any in-flight request.

    It's really easy to use, and no external tool is required. First, add this line to your Gemfile:

    gem 'unicorn-worker-killer'
    

    Then add the following lines to your config.ru:

    # Unicorn self-process killer
    require 'unicorn/worker_killer'
    
    # Max requests per worker
    use Unicorn::WorkerKiller::MaxRequests, 10240 + Random.rand(10240)
    
    # Max memory size (RSS) per worker
    use Unicorn::WorkerKiller::Oom, (96 + Random.rand(32)) * 1024**2
    

    It's highly recommended to randomize the threshold to avoid killing all workers at once.

  • 2021-02-08 00:19

    Try removing newrelic from your app if you are using it. The newrelic_rpm gem itself was leaking memory. I had the same issue and scratched my head for almost 10 days to figure it out.

    Hope that helps you.

    I contacted the New Relic support team, and below is their reply:

    Thanks for contacting support. I am deeply sorry for the frustrating experience you have had. As a performance monitoring tool, our intention is to "first do no harm", and we take these kinds of issues very seriously.

    We recently identified the cause of this issue and have released a patch to resolve it. (see https://newrelic.com/docs/releases/ruby). We hope you'll consider resuming monitoring with New Relic with this fix. If you are interested in doing so, make sure you are using at least v3.6.8.168 from now on.

    Please let us know if you have any additional questions or concerns. We're eager to address them.

    Even after I updated the newrelic gem, it was still leaking memory. Finally I had to remove newrelic; it is a great tool, but we cannot use it at such a cost (a memory leak).


  • 2021-02-08 00:25

    I think you are conflating memory usage and disk-space usage. It looks like Unicorn and its children were using around 500 MB of memory; look at the second "-/+ buffers/cache:" number to see the real free memory.

    As far as the disk space goes, my bet is on some sort of log file going nuts. You should run du -h in the data directory to find out what exactly is using so much storage.

    As a final suggestion, it's a little-known fact that Ruby never returns memory to the OS once it has allocated it. It does still use that memory internally, but the only way to get Ruby to yield unused memory back to the OS is to quit the process. For example, if you have a request that spikes your memory usage to 500 MB, you won't get that 500 MB back, even after the request has completed and a GC cycle has run. However, Ruby will reuse that allocated memory for future requests, so it is unlikely to grow further.
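    If you'd rather explore the disk usage from Ruby (say, from a console) than from the shell, here is a rough stand-in for du. This is a sketch under assumptions: largest_dirs is a hypothetical helper written for this answer, not part of any gem, and /root/data is just the directory from the question.

```ruby
require 'find'

# Sum file sizes per directory under `root` and return the heaviest
# directories first -- a rough Ruby stand-in for `du`.
def largest_dirs(root, limit = 10)
  sizes = Hash.new(0)
  Find.find(root) do |path|
    sizes[File.dirname(path)] += File.size(path) if File.file?(path)
  end
  sizes.sort_by { |_dir, bytes| -bytes }.first(limit)
end

# Usage:
#   largest_dirs('/root/data').each do |dir, bytes|
#     printf "%8.1f MB  %s\n", bytes / (1024.0 ** 2), dir
#   end
```

    In practice du -h is faster for a one-off check; this is mainly useful if you want to log the result from inside the app.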

    Finally, Sergei mentions god for monitoring process memory. If you are interested in using it, there is already a good config file here. Be sure to read the associated article, as there are key settings in the unicorn config file that this god config assumes you have.

  • 2021-02-08 00:29

    You can set up god to watch your unicorn workers and kill them if they eat too much memory. The Unicorn master process will then fork another worker to replace the killed one. Problem worked around. :-)
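    A minimal god config along these lines might look like the following sketch. It is not a drop-in file: the app path, PID file location, and the 300 MB threshold are all assumptions, and this version supervises the Unicorn master. To reap only an oversized worker (so the master forks a replacement), you would watch each worker's PID file the same way, with just a stop command sending QUIT.

```ruby
# unicorn.god -- a sketch, not a drop-in config; adjust paths and
# the memory threshold for your deployment.
app_root = "/var/www/myapp"

God.watch do |w|
  w.name     = "unicorn"
  w.interval = 30.seconds
  w.pid_file = "#{app_root}/tmp/pids/unicorn.pid"

  w.start   = "cd #{app_root} && bundle exec unicorn -c config/unicorn.rb -E production -D"
  w.stop    = "kill -QUIT `cat #{w.pid_file}`"   # graceful shutdown
  w.restart = "kill -USR2 `cat #{w.pid_file}`"   # zero-downtime reload

  w.behavior(:clean_pid_file)

  # Restart when memory stays high: above 300 MB in 3 of the last 5 checks.
  w.restart_if do |restart|
    restart.condition(:memory_usage) do |c|
      c.above = 300.megabytes
      c.times = [3, 5]
    end
  end
end
```

    Checking over several intervals (c.times = [3, 5]) avoids restarting on a single transient spike.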
