I'm new to GlassFish, and to application servers in general. I have an Amazon EC2 instance running Ubuntu and have installed GlassFish 4. It starts up without problems, but after a while it shuts down on its own.
I am facing the exact same situation. I suspected that the JVM was running out of RAM, since the free-tier EC2 instance has a meager 600 MB (cat /proc/meminfo to verify). To look for clues I turned on JVM logging for the GlassFish domain by adding the following lines to the JVM options in domain.xml:
-XX:LogFile=${com.sun.aas.instanceRoot}/logs/jvm.log
-XX:+LogVMOutput
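If you'd rather not edit domain.xml by hand, the same options can be added with asadmin. This is only a rough sketch, assuming asadmin is on the PATH and you are targeting the default domain; on some JDKs the logging flags are diagnostic options and also need -XX:+UnlockDiagnosticVMOptions:

# Colons inside an option must be escaped, because asadmin uses ':' to separate multiple options
asadmin create-jvm-options '-XX\:+UnlockDiagnosticVMOptions'
asadmin create-jvm-options '-XX\:+LogVMOutput'
asadmin create-jvm-options '-XX\:LogFile=${com.sun.aas.instanceRoot}/logs/jvm.log'
# Restart the domain so the new options take effect
asadmin restart-domain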
Later, when GlassFish shut down, jvm.log contained lots of messages like the following:
I never found out what they really meant, but I'm posting them here in case someone takes the same road as me and googles them.
Then finally I looked in /var/log/syslog (the one I found was actually the rotated syslog.1), and voila! There was the confirmation that the JVM process had run out of memory and been killed by the kernel:
Dec 20 07:44:44 ip-172-31-33-222 kernel: [1518108.211801] Out of memory: Kill process 22248 (java) score 743 or sacrifice child
Dec 20 07:44:44 ip-172-31-33-222 kernel: [1518108.211833] Killed process 22248 (java) total-vm:1622220kB, anon-rss:447752kB, file-rss:0kB
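If you want to check for the same thing on your own instance, something along these lines should surface OOM-killer entries (paths are the Ubuntu defaults; older rotated logs are gzipped):

# Search the current and most recently rotated syslog
grep -i "out of memory" /var/log/syslog /var/log/syslog.1
# Older rotations are compressed, so use zgrep for those
zgrep -i "out of memory" /var/log/syslog.*.gz
# The kernel ring buffer often still holds the message as well
dmesg | grep -i "out of memory"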
It seemed to me that increasing swap space should fix the problem. It turns out that on EC2 instances swap is 0 by default, so I allocated 1 GB; see How do you add swap to an EC2 instance?
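The gist of it is roughly the following (only a sketch: the 1 GB size matches what I used, and /var/swapfile is just an example path):

# Create a 1 GB swap file
sudo dd if=/dev/zero of=/var/swapfile bs=1M count=1024
# Lock down permissions, format it as swap, and enable it
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile
# Make it persistent across reboots
echo '/var/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify swap is now available
free -m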
The server used to crash on a daily basis, but with swap on it hasn't crashed in weeks.