I'm working on a huge legacy Java application, with a lot of handwritten code that nowadays you'd let a framework handle.
The problem I'm facing right now is that we're running out of open file handles.
This is a coding pattern that helps find unclosed resources. It closes the resources and also complains in the log about the problem.
import java.io.Closeable;
import java.io.FileInputStream;
import java.io.IOException;

class MonitoredFile implements Closeable {
    private final FileInputStream file;   // the actual resource being guarded
    private boolean closed = false;

    MonitoredFile(FileInputStream file) {
        this.file = file;
    }

    public void close() throws IOException {
        closed = true;
        file.close();
    }

    protected void finalize() throws Throwable {
        if (!closed) {
            System.err.println("OI! YOU FORGOT TO CLOSE A FILE!");   // or your logging framework
            file.close();
        }
        super.finalize();
    }
}
Wrap the above file.close() calls in try-catch blocks that ignore errors.
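For instance, the close() call inside finalize() could be guarded like this (a minimal sketch; swallowing the exception is deliberate here, because there is nothing useful left to do with it):

try {
    file.close();
} catch (IOException ignored) {
    // nothing sensible to do here; we only care that the descriptor gets released
}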
Also, Java 7 has a new 'try-with-resources' feature that can auto-close resources.
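A minimal sketch of what that looks like (the file name is just a placeholder for this example):

import java.io.FileInputStream;
import java.io.IOException;

public class TryWithResourcesDemo {
    public static void main(String[] args) {
        // The stream is closed automatically when the try block exits,
        // whether it finishes normally or throws.
        try (FileInputStream in = new FileInputStream("data.txt")) {
            System.out.println("first byte: " + in.read());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}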
It could certainly give you an idea. Since it's Java, the file open/close mechanics should be implemented similarly on every platform (unless one of the JVMs is implemented incorrectly). I would recommend using File Monitor on Windows.
I would start by asking my sysadmin to get a listing of all open file descriptors for the process. Different systems do this in different ways: Linux, for example, has the /proc/PID/fd directory. I recall that Solaris has a command (maybe pfiles?) that will do the same thing -- your sysadmin should know it.
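On Linux you can even do a quick sanity check from inside the JVM by listing /proc/self/fd (a Linux-only sketch; on Solaris you would rely on pfiles or your sysadmin instead):

import java.io.File;

public class FdCount {
    public static void main(String[] args) {
        // Each entry under /proc/self/fd is one descriptor currently open in this process.
        File[] fds = new File("/proc/self/fd").listFiles();
        System.out.println("open descriptors: " + (fds == null ? "unknown" : fds.length));
    }
}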
However, unless you see a lot of references to the same file, an fd list isn't going to help you. If it's a server process, it probably has lots of files (and sockets) open for a reason. The only way to resolve the problem is to adjust the system limit on open files -- you can also check the per-user limit with ulimit, but in most current installations that equals the system limit.
To answer the second part of the question:
what can cause open file handles to run out?
Opening a lot of files, obviously, and then not closing them.
The simplest scenario is that the references to whatever objects hold the native handles (e.g., FileInputStream) are thrown away before being closed, which means the files remain open until the objects are finalized.
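A minimal sketch of that scenario (the method name is made up for illustration):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class LeakyRead {
    // Leaks a descriptor on every call: the stream is never closed, so the
    // underlying file handle stays open until the FileInputStream is finalized.
    static int firstByte(File f) throws IOException {
        FileInputStream in = new FileInputStream(f);
        return in.read();   // 'in' goes out of scope here without close()
    }
}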
The other option is that the objects are stored somewhere and not closed. A heap dump might be able to tell you what lingers where (jmap and jhat are included in the JDK, or you can use jvisualvm if you want a GUI). You're probably interested in looking for objects owning FileDescriptors.
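If you prefer to trigger the dump from inside the application rather than with jmap, the HotSpot-specific HotSpotDiagnosticMXBean can do it (a sketch, assuming a HotSpot JVM on Java 7 or later; heap.hprof is an arbitrary output path):

import java.io.IOException;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws IOException {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live = true dumps only reachable objects, which is what you want when
        // hunting for FileDescriptor instances that are still referenced somewhere.
        bean.dumpHeap("heap.hprof", true);
    }
}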
I would double-check the environment settings on your Solaris box. I believe that by default Solaris only allows 256 file handles per process. For a server application, especially if it's running on a dedicated server, this is very low. Figure 50 or more descriptors for opening JRE and library JARs, and then at least one descriptor for each incoming request and database query, probably more, and you can see how this just won't cut the mustard for a serious server.
Have a look at the /etc/system file, for the values of rlim_fd_cur and rlim_fd_max, to see what your system has set. Then consider whether this is reasonable (you can see how many file descriptors are open while the server is running with the lsof command, ideally with the -p [process ID] parameter).
Not a direct answer to your question, but these problems could be the result of releasing file resources incorrectly in your legacy code. For example, if you're working with FileOutputStream, make sure the close method is called in a finally block, as in this example:
FileOutputStream out = null;
try {
    out = new FileOutputStream("output.txt");   // "output.txt" is a placeholder; your file-handling code goes here
    // ... write to out ...
} catch (IOException e) {
    // handle the exception
} finally {
    if (out != null) {
        try { out.close(); } catch (IOException e) { /* ignored */ }
    }
}