I have a misbehaving application that seems to leak memory. After a brief profiler investigation, most of the memory (80%) is held by java.lang.ref.Finalizer
instances. I sus
Item 7 of Effective Java, Second Edition is "Avoid finalizers". I strongly recommend you read it. Here is an extract that may help you:
"Explicit termination methods are typically used in combination with try-finally construct to ensure termination"
Both quotes say:
An exception will cause finalization of this object to be halted/terminated.
Both quotes also say:
The uncaught exception is ignored (i.e. not logged or handled by the VM in any way)
So that answers the first half of your question. I don't know enough about Finalizers to give you advice on tracking down your memory leak though.
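To illustrate the pattern in that extract, here is a minimal sketch; Resource and its close() method are placeholder names, not taken from the book:

```java
// A minimal sketch of an explicit termination method used with try-finally.
// "Resource" and close() are placeholder names, not from the book.
public class Resource {
    private boolean closed;

    public void use() {
        if (closed) {
            throw new IllegalStateException("already closed");
        }
        // ... work with the underlying file handle / socket / native resource ...
    }

    // Explicit termination method: callers release the resource themselves,
    // instead of relying on a finalizer to do it eventually.
    public void close() {
        closed = true;
        // ... release the underlying resource here ...
    }

    public static void main(String[] args) {
        Resource r = new Resource();
        try {
            r.use();
        } finally {
            r.close(); // runs even if use() throws
        }
    }
}
```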
EDIT: I found this page, which might be of use. It has advice such as setting fields to null manually in finalizers to allow the GC to reclaim them.
EDIT2: Some more interesting links and quotes:
From Anatomy of a Java Finalizer
Finalizer threads are not given maximum priorities on systems. If a "Finalizer" thread cannot keep up with the rate at which higher priority threads cause finalizable objects to be queued, the finalizer queue will keep growing and cause the Java heap to fill up. Eventually the Java heap will get exhausted and a java.lang.OutOfMemoryError will be thrown.
and also
it's not guaranteed that any objects that have a finalize() method are garbage collected.
EDIT3: Upon reading more of the Anatomy link, it appears that throwing exceptions in the Finalizer thread slows it down considerably, almost as much as calling Thread.yield(). You appear to be right that the Finalizer thread will eventually flag the object as eligible for GC even if an exception is thrown. However, since the slowdown is significant, it is possible that in your case the Finalizer thread is simply not keeping up with the rate at which objects are created and fall out of scope.
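If you want to see that effect for yourself, a small reproducer along these lines (class name and allocation size are invented) can flood the queue faster than the Finalizer thread drains it, and should eventually end in java.lang.OutOfMemoryError:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizerBacklogDemo {
    // Each instance is finalizable, and its finalizer throws, which (per the
    // Anatomy article) slows the single Finalizer thread down considerably.
    static class Leaky {
        @SuppressWarnings("unused")
        private final byte[] payload = new byte[1024]; // give each object some weight

        @Override
        protected void finalize() {
            throw new RuntimeException("make the Finalizer thread work harder");
        }
    }

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        for (long i = 0; ; i++) {
            new Leaky(); // immediately unreachable, but must wait for finalization
            if (i % 1_000_000 == 0) {
                // If this number keeps climbing, the queue is growing faster
                // than the Finalizer thread can drain it.
                System.out.println("pending finalization: "
                        + memory.getObjectPendingFinalizationCount());
            }
        }
    }
}
```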
I once saw a similar problem: the finalizer thread could not keep up with the rate at which finalizable objects were being generated.
My solution was to add closed-loop control, using MemoryMXBean.getObjectPendingFinalizationCount() and a PD (proportional-derivative) control algorithm to regulate the speed at which we generate finalizable objects. Since we have a single entry point that creates them, we just sleep for the amount of time the PD algorithm computes (see the sketch below). It works well, though you need to tune the parameters of the PD algorithm.
Hope it helps.
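A minimal sketch of that kind of throttle, assuming the gains, the target backlog and the single creation entry point are all placeholders you would tune and adapt:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizationThrottle {
    private static final MemoryMXBean MEMORY = ManagementFactory.getMemoryMXBean();

    // Gains and target backlog are made-up starting points; tune them for your load.
    private static final double KP = 0.5;              // proportional gain
    private static final double KD = 0.1;              // derivative gain
    private static final int TARGET_PENDING = 10_000;  // acceptable finalization backlog

    private long previousError;

    // Call this from the single entry point that creates finalizable objects.
    public synchronized void beforeCreate() throws InterruptedException {
        long pending = MEMORY.getObjectPendingFinalizationCount();
        long error = pending - TARGET_PENDING;   // how far over budget we are
        long derivative = error - previousError; // how fast the backlog is changing
        previousError = error;

        // Interpreted here as milliseconds (the answer above slept whole seconds);
        // the unit, like the gains, is something to tune.
        long sleepMillis = (long) (KP * error + KD * derivative);
        if (sleepMillis > 0) {
            Thread.sleep(Math.min(sleepMillis, 5_000)); // cap the pause
        }
    }
}
```

The derivative term reacts to how quickly the backlog is growing, so the throttle starts braking before the queue gets deep rather than after.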
My first step would be to establish whether this is a genuine memory leak or not.
The points raised in the previous answers all relate to the speed at which objects are collected, not the question of whether your objects are collected at all. Only the latter is a genuine memory leak.
We had a similar predicament on my project, and ran the application in "slow motion" mode to figure out if we had a real leak. We were able to do this by slowing down the stream of input data.
If the problem disappears when you run in "slow motion" mode, then the problem is probably one of the ones suggested in the previous answers, i.e. the Finalizer thread can't process the finalizer queue fast enough.
If that is the problem, it sounds like you might need to do some non-trivial refactoring as described in the page Bringer128 linked to, e.g.
Now let's look at how to write classes that require postmortem cleanup so that their users do not encounter the problems previously outlined. The best way to do so is to split such classes into two -- one to hold the data that need postmortem cleanup, the other to hold everything else -- and define a finalizer only on the former
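As a rough illustration of that split (the class names, the native handle and the buffer are invented for the example, not taken from the article):

```java
public class Connection {
    // Holds ONLY the state that needs postmortem cleanup, and is the only
    // part that carries a finalizer.
    private static class NativeHandle {
        private long handle; // e.g. a pointer to a native resource

        NativeHandle(long handle) {
            this.handle = handle;
        }

        void release() {
            if (handle != 0) {
                // nativeFree(handle); // hypothetical native call
                handle = 0;
            }
        }

        @Override
        protected void finalize() throws Throwable {
            try {
                release(); // safety net only; callers should close() explicitly
            } finally {
                super.finalize();
            }
        }
    }

    private final NativeHandle nativeHandle = new NativeHandle(42L);

    // Everything else lives in the outer object, which has no finalizer and can
    // be reclaimed immediately instead of sitting in the finalizer queue.
    private final byte[] largeBuffer = new byte[1 << 20];

    public void close() {
        nativeHandle.release();
    }
}
```

Because NativeHandle is a static nested class and holds no reference back to Connection, only the small handle object waits in the finalizer queue; the large buffer is reclaimed as soon as the Connection becomes unreachable.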
I had the same issue. In our case, it was because an object called wait(0) in its finalize() method and never got notified, which blocked the java.lang.ref.Finalizer$FinalizerThread.
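A stripped-down reproduction of that failure mode, with an invented class name, looks roughly like this:

```java
public class BlockingFinalizer {
    @Override
    protected void finalize() throws Throwable {
        synchronized (this) {
            wait(0); // waits forever: nothing will ever call notify() on this object
        }
    }

    public static void main(String[] args) throws Exception {
        new BlockingFinalizer(); // becomes garbage immediately
        System.gc();             // encourage the VM to queue it for finalization
        Thread.sleep(1_000);
        // From here on, java.lang.ref.Finalizer$FinalizerThread is parked inside
        // the wait(0) above, and the pending-finalization queue can only grow.
    }
}
```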