I have a Java application that is hosted by a web hosting company. Every few days my app goes down with:
[2011-03-09 15:52:14,501] ERROR http-12021-9
ja
One possibility is that you have reached your user limit for the number of open files.
I believe that every process/thread consumes one or more file descriptors.
For example, when this occurs for your user, no shell command will work, since shell commands fork off a process to execute (you see errors like "-bash: fork: retry: Resource temporarily unavailable").
I hit this issue and found that only the current user was unable to spawn processes... other users were unaffected.
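To get a rough idea of how close the process is to the limit, you can count the descriptors it currently holds. This is a minimal sketch; the pgrep pattern assumes your server process shows up as "java":
# Find the PID of the Java process (assumes the command line contains "java")
PID=$(pgrep -f java | head -n 1)
# Count the file descriptors the process has open
ls /proc/$PID/fd | wc -l
# Compare against the per-process limit
grep "open files" /proc/$PID/limits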
To resolve this, raise your ulimit -n (max open files) setting... details follow.
You can see your user limits with the command:
ulimit -a
Up your max file limit with the following:
ulimit -n 65536
Here is what I have right now:
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256797
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 75000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 100000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
To see all the explicit limits for your system:
cat /etc/security/limits.conf
Please note: I'm using Oracle Linux 6.3 - results may differ slightly between distros.
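Also note that ulimit -n only changes the limit for the current shell session; to make it persistent you would typically add entries like the following to /etc/security/limits.conf (the user name "myuser" is just a placeholder for the account that runs the JVM):
# /etc/security/limits.conf - example entries (replace "myuser" with your account)
myuser  soft  nofile  65536
myuser  hard  nofile  65536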
It's most likely a problem with the JVM on the web-server end. Please check out the following link for some details:
http://blog.egilh.com/2006/06/2811aspx.html
When you fire up your process, the JVM has a limited heap size (the default is 128 MB). That server may well have more memory, but your JVM doesn't - you have used it all.
You can change this with the -Xms and -Xmx command-line arguments, but I would suggest finding the memory leak first :)
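For example, a start command along these lines would raise the initial and maximum heap (the jar name is just a placeholder):
java -Xms512m -Xmx1024m -jar your-app.jar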
Did you do any memory tracking? Fire up jconsole and watch or log your memory consumption over a 24-hour period. If it (on average) goes up without coming back down, then you are running out of memory and possibly have insufficient memory left to store the details of your new thread.
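If you can't keep a jconsole session attached that long, jstat can log heap usage from the command line; this sketch assumes a single Java process on the box:
# Log GC/heap utilisation every 60 seconds for 24 hours (1440 samples)
jstat -gcutil $(pgrep -f java | head -n 1) 60000 1440 >> heap-usage.log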
It can also be a problem with the number of open files Linux allows. Raise the limit as follows: ulimit -n 65536 (you can give any suitable number).
This answer is for everyone who is running Java via systemd (e.g. a self-created Tomcat service).
I was running my Java application on the server via Tomcat.
I also created a service unit in systemd for convenience, so it starts when the server boots and I can also control it via systemd (or service tomcat restart).
But systemd units come with default values for the maximum number of allowed tasks (threads). On my machine it was 195 tasks.
Once I changed the value in the service unit with TasksMax=1024 and reloaded it with systemctl daemon-reload, everything worked as expected.
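To see the current task count and limit for the unit before changing anything, something like this should work on recent systemd versions (the unit name tomcat matches the example below):
systemctl status tomcat                # shows a "Tasks: N (limit: M)" line
systemctl show tomcat -p TasksMax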
For example, the service unit file in /etc/systemd/system/tomcat.service:
[Unit]
Description=Tomcat9
After=network.target
[Service]
Type=forking
User=tomcat9
Group=tomcat9
TasksMax=1048
Environment=CATALINA_PID=/opt/tomcat/tomcat9.pid
Environment=JAVA_HOME=/usr/lib/jvm/default-java
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
Environment="CATALINA_OPTS=-Xms2048m -Xmx28384m"
Environment="JAVA_OPTS=-Dfile.encoding=UTF-8 -Dnet.sf.ehcache.skipUpdateCheck=true -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC"
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
[Install]
WantedBy=multi-user.target
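After editing the unit file, reload systemd and restart the service so the new limit takes effect:
sudo systemctl daemon-reload
sudo systemctl restart tomcat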