Very strange OutOfMemoryError

Submitted by 旧时模样 on 2019-12-04 02:18:51

I want to close this question with a summary of the current state.

The last test is now up over 60 hours without any problems. That leads us to the following summary/conclusions:

  • We have a high-throughput server using lots of objects that ultimately implement finalize(). These objects are mostly JNA memory handles and file streams. Because finalizable objects were created faster than the GC and the finalizer thread could clean them up, the process failed after ~3 hours. This is a well-known phenomenon (-> google).
  • We did some optimizations so the server got rid of nearly all the JNA Finalizers. This version was tested with jProfiler attached.
  • The server died some hours later than our initial attempt...
  • The profiler showed a huge number of finalizers, this time caused almost exclusively by file streams. The queue was not cleaned up, even after pausing the server for some time.
  • Only after manually triggering System.runFinalization() was the queue emptied. Resuming the server caused it to refill...
  • This is still inexplicable. We now suspect some interaction between the profiler and GC/finalization.
  • To debug what could be the reason for the inactive finalizer thread we detached the profiler and attached the debugger this time.
  • The system was running without any noticeable defect... FinalizerThread and GC were all "green".
  • We resumed the test (now, for the first time, without any agents attached besides jConsole) and it has been up and running fine for over 60 hours. So apparently the initial JNA refactoring solved the issue, and only the profiling session added some indeterminism (greetings from Heisenberg).
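The queue behavior described in the bullets above can be observed programmatically. This is a minimal sketch using the standard MemoryMXBean, which exposes the number of objects waiting for their finalize() call; the class name FinalizerQueueCheck is mine:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizerQueueCheck {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();

        // Approximate number of objects whose finalize() has not yet run.
        // A value that only ever grows is the symptom described above.
        int pending = mem.getObjectPendingFinalizationCount();
        System.out.println("Pending finalizations: " + pending);

        // The manual drain that emptied the queue during the test.
        // Normally the finalizer thread does this on its own.
        System.runFinalization();
        System.gc();

        System.out.println("After runFinalization: "
                + mem.getObjectPendingFinalizationCount());
    }
}
```

Polling this counter from jConsole (or a small watchdog thread) would have shown the backlog building long before the OutOfMemoryError.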

Other strategies for managing the finalizers are for example discussed in http://cleversoft.wordpress.com/2011/05/14/out-of-memory-exception-from-finalizer-object-overflow/ (besides the not overly clever "don't use finalizers"..).

Thanks for all your input.

Difficult to give a specific answer to your dilemma, but take a heap dump and run it through IBM's HeapAnalyzer. Search for "heap analyzer" at http://www.ibm.com/developerworks (the direct link keeps changing). It seems highly unlikely that the finalizer thread "suddenly stops working" if you are not overriding finalize().

It is possible for the Finalizer to be blocked, but I don't know how it could simply die.

If you have lots of FileInputStream and FileOutputStream finalize() methods running, this indicates you are not closing your files correctly. Make sure these streams are always closed in a finally block, or use Java 7's try-with-resources (ARM, Automatic Resource Management).
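A minimal sketch of the try-with-resources pattern just mentioned; the file-copy task and the names copy/from/to are illustrative, not from the original question:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class CopyFile {
    // Copy a file using try-with-resources: both streams are closed
    // automatically when the block exits (normally or via an exception),
    // so the file descriptors never have to wait for finalize().
    static void copy(String from, String to) throws IOException {
        try (FileInputStream in = new FileInputStream(from);
             FileOutputStream out = new FileOutputStream(to)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } // close() runs here, in reverse declaration order
    }
}
```

With every stream closed deterministically like this, the finalizer queue never accumulates file-stream entries in the first place.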

jConsole says it's WAITING.

To be WAITING it has to be waiting on an object.

Both FileInputStream and FileOutputStream have the same comment in their finalize() methods:

    ...
    /*
     * Finalizer should not release the FileDescriptor if another
     * stream is still using it. If the user directly invokes
     * close() then the FileDescriptor is also released.
     */
    runningFinalize.set(Boolean.TRUE);
    ...

which means your finalizer may be waiting for a stream to be released. That in turn means that, as Joop Eggen mentioned above, your app may be doing something wrong when closing one of the streams.

My guess: it is an overridden close() in your own stream (wrapper) classes. Since stream classes are often wrappers that delegate to others, I could imagine that a nesting like new A(new B(new C())) might trigger some wrong logic on close. You should look for double closing and incorrect delegated closing, and maybe also a forgotten close (close on the wrong object?).

With a slow-growing heap, the Java garbage collector can run out of memory when it belatedly tries to collect in a low-memory situation. Try turning on concurrent mark-and-sweep garbage collection with -XX:+UseConcMarkSweepGC and see if your problem goes away.
