I have a Java app which runs just fine (on Ubuntu 10.04) for a few hours until it hits "java.net.SocketException: Too many open files". The code for Sender.java can be found
You might also want to check the Linux maximum open file limit. This related link is for a bespoke Java-based product, but it nicely explains the steps required to fix the issue.
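One quick way to confirm whether you are hitting that limit, assuming lsof is installed and <pid> is the process id of your JVM, is to compare the per-process limit with the number of descriptors the process actually holds:
$ ulimit -n
$ lsof -p <pid> | wc -l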
(RESOLVED)
I recently had the same error because /var/log on the database server was full.
#df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p3     126G  126G    0G 100% /var
#echo "">/var/log/postgresql/postgresql.log
Now, the error is gone!!!
Important: review what you log to postgresql.log and check the logging settings in your postgresql.conf.
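For example, enabling log rotation in postgresql.conf helps keep the log from filling the partition again (the parameter values below are only illustrative):
logging_collector = on
log_rotation_age = 1d
log_rotation_size = 100MB
log_truncate_on_rotation = on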
I solved the problem by closing the connection in the finally block.
import java.io.IOException;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.NameValuePair;
import org.apache.commons.httpclient.SimpleHttpConnectionManager;
import org.apache.commons.httpclient.methods.PostMethod;

public static String postMethod(String jsonStr, String sendUrl) {
    String resultJson = "";
    HttpClient httpClient = new HttpClient();
    PostMethod post = new PostMethod(sendUrl);
    post.setRequestHeader("Content-Type", "application/x-www-form-urlencoded;charset=utf-8");
    NameValuePair[] param = { new NameValuePair("message", jsonStr) };
    post.setRequestBody(param);
    try {
        httpClient.executeMethod(post);
        resultJson = post.getResponseBodyAsString();
    } catch (HttpException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        // Always release the connection and shut down the connection manager,
        // otherwise each call leaves a socket (and its file descriptor) open.
        post.releaseConnection();
        ((SimpleHttpConnectionManager) httpClient.getHttpConnectionManager()).shutdown();
    }
    return resultJson;
}
On line 438 you get the response as a stream and convert it to a byte array. The InputStream returned by entity.getContent() never gets closed, which could be contributing to the problem. Also, HttpEntity.consumeContent() is deprecated for related reasons.
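A minimal sketch of the fix, assuming Apache HttpClient 4.x (the class and method names below are only illustrative, since the original Sender.java isn't reproduced here): either let EntityUtils consume and close the content stream for you, or close the InputStream yourself in a finally block.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.util.EntityUtils;

public class ResponseReader {

    // Preferred: EntityUtils fully reads the entity and closes the underlying
    // stream, which releases the connection (and its file descriptor).
    static byte[] readBody(HttpResponse response) throws IOException {
        HttpEntity entity = response.getEntity();
        return entity == null ? new byte[0] : EntityUtils.toByteArray(entity);
    }

    // Manual version: if you read entity.getContent() yourself, always close
    // the stream in a finally block.
    static byte[] readBodyManually(HttpResponse response) throws IOException {
        InputStream in = response.getEntity().getContent();
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        } finally {
            in.close(); // without this, each request can leak a file descriptor
        }
    }
}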
"java.net.SocketException: Too many files open"can be seen any Java Server application e.g. Tomcat, Weblogic, WebSphere etc, with client connecting and disconnecting frequently.
Please note that socket connections are treated like files and they use file descriptor, which is a limited resource.
Different operating system has different limits on number of file handles they can manage.
In short, this error is coming because clients are connecting and disconnecting frequently.If you want to handle it on your side, you have two options :
1) Increase the number of open file handles or file descriptors allowed per process.
On UNIX-based operating systems, e.g. Ubuntu or Solaris, you can use the command ulimit -a to find out how many open file handles are allowed per process.
$ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 10
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
You can see that open files (-n) is 256, which means only 256 open file handles are allowed per process. If your Java program exceeds this limit, it will throw the java.net.SocketException: Too many open files error; remember that Tomcat, WebLogic, or any other application server is a Java program running on the JVM.
You can raise this limit with ulimit -n to a larger number, e.g. 4096, but do it with the advice of your UNIX system administrator, and if you have a separate UNIX support team, it is better to escalate to them.
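For example, to raise the limit to 4096 for the current shell (the value and user name below are only illustrative; check with your system administrator first):
$ ulimit -n 4096
To make it permanent for a particular user, add soft and hard nofile entries to /etc/security/limits.conf (assuming a PAM-based distribution such as Ubuntu):
myuser    soft    nofile    4096
myuser    hard    nofile    8192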
2) Reduce the timeout for the TIME_WAIT state in your operating system.
On UNIX-based systems, you can see the current configuration in the /proc/sys/net/ipv4/tcp_fin_timeout file.
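For example, to inspect the current value and lower it (the number below is only illustrative; the value is in seconds and changing it affects the whole machine, so check with your administrator first):
$ cat /proc/sys/net/ipv4/tcp_fin_timeout
# sysctl -w net.ipv4.tcp_fin_timeout=30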
On Windows-based systems, you can see this information in the Windows registry. You can change the TCP TIME_WAIT timeout in Windows by following the steps below:
1) Open the Windows Registry Editor by typing regedit in the Run command window
2) Find the key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\tcpip\Parameters
3) Add a new value TcpTimedWaitDelay as a decimal and set the desired timeout in seconds (60-240)
4) Restart your Windows machine.