I have a Rails 3.2.8 app running on Heroku Cedar with Ruby 1.9.3. The app runs fine when it launches, but after a day or so of continuous use I start to see R14 (memory quota exceeded) errors in my logs. Once the memory errors start, they never go away, even if the app is idle for several hours.
Shouldn't the garbage collector clean up unused objects after a while and reduce the memory load? That doesn't seem to be happening on Heroku. Generally, memory usage starts to creep up after running some reports with several thousand rows of data, even though the results are paginated.
How can I find the memory leak? Plugins like bleak_house are way out of date or don't run nicely in the Heroku environment. Can I adjust the GC settings to make it more aggressive?
The GC should do the cleanup, and probably does. You can force a collection with GC.start; if many unreferenced objects were waiting to be collected, this will free them, but I suspect that is not the issue.
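As a quick sanity check (a minimal sketch I'm adding here, assuming Ruby 1.9.3 where GC.stat exposes :heap_live_num; the key names vary across Ruby versions), compare the live-slot count before and after a forced collection:
before = GC.stat[:heap_live_num]   # live heap slots before collecting
GC.start                           # force a full mark-and-sweep pass
after = GC.stat[:heap_live_num]    # live heap slots after collecting
puts "live slots: #{before} -> #{after}"
If the number barely drops, your objects are still reachable, which points to a leak (something is holding references) rather than a lazy GC.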
Is it possible you are somehow creating a bunch of objects and never releasing them, for example by keeping cached copies around?
I'm unfamiliar with the existing tools to check this, but you may want to check which objects exist using ObjectSpace. For example:
ObjectSpace.each_object.with_object(Hash.new(0)) { |obj, h| h[obj.class] += 1 }
# => a Hash with the number of live objects, keyed by class
If you get an unexpected count for one of your classes, you will have a better idea of where to look.
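To make that histogram easier to scan (a small follow-on sketch, with counts standing for the Hash built above), sort it and print the classes with the most live instances:
counts = ObjectSpace.each_object.with_object(Hash.new(0)) { |obj, h| h[obj.class] += 1 }
# Show the 20 classes with the most live instances, largest first.
counts.sort_by { |_klass, n| -n }.first(20).each do |klass, n|
  puts format('%-40s %d', klass, n)
end
Running this from a Rails console on a dyno (heroku run rails console) before and after exercising the reports, and diffing the counts, should narrow down which classes are piling up.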
Install the New Relic add-on. It has a bunch of useful metrics that you can use to track down the source of the leak. I think it's generally a better idea to find which part of the code takes the longest to execute and try to optimize that, rather than tweak the GC outright.
One of the nice features New Relic includes is the ability to pinpoint the source of the longest-running SQL queries, for example. I encourage you to give it a try.
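For completeness (a sketch of the usual setup, not something specific to this app): the Ruby agent gem is newrelic_rpm, and on Heroku the add-on supplies the license key through config vars, so the app side is just the Gemfile entry:
# Gemfile, then bundle install and redeploy
gem 'newrelic_rpm'
Provision the add-on from the Heroku dashboard or CLI (plan names have changed over time), then watch the memory graphs and slow-transaction traces after the reports run.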
Source: https://stackoverflow.com/questions/13311930/how-can-i-find-a-memory-leak-on-heroku