We have a Java process running on Solaris 10 serving about 200-300 concurrent users. The administrators have reported that the memory used by the process increases significantly over time.
I have encountered a similar problem and found a resolution:
- Solaris 11
- JDK 10
- REST application using HTTPS (Jetty server)
There was a significant increase of the C heap (observed via pmap) over time.
I decided to do some stress tests with libumem, so I started the process with:
UMEM_DEBUG=default UMEM_LOGGING=transaction LD_PRELOAD=libumem.so.1
and stressed the application with HTTPS requests. After a while I connected to the running process with mdb (mdb -p <pid>) and ran the ::findleaks dcmd, which reported this as a leak:
libucrypto.so.1`ucrypto_digest_init
So it seems that the OracleUcrypto implementation of the JCA (Java Cryptography Architecture) has some issues on Solaris.
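If you want to reproduce the growth without driving the full HTTPS stack, a minimal sketch like the following (class name, algorithm choice, and loop count are my own, not from the original diagnosis) exercises the digest-init path directly. Assuming OracleUcrypto is the preferred provider for SHA-256, running this under the libumem settings above should show the same culprit in ::findleaks:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class UcryptoLeakProbe {
    public static void main(String[] args) throws Exception {
        // Print which provider actually serves SHA-256.
        System.out.println(MessageDigest.getInstance("SHA-256").getProvider().getName());

        // Create and use many digest instances; with OracleUcrypto first,
        // each one should pass through libucrypto's digest init.
        byte[] data = "stress".getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < 1_000_000; i++) {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(data);
            md.digest();
        }
    }
}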
The problem was resolved by updating the $JAVA_HOME/conf/security/java.security file: I changed the priority of OracleUcrypto to 3 and of the SUN implementation to 1:
security.provider.1=SUN
security.provider.2=SunPKCS11 ${java.home}/conf/security/sunpkcs11-solaris.cfg
security.provider.3=OracleUcrypto
After this the problem disappeared.
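To verify that the new order actually took effect at runtime, a small check such as this (my own snippet, not part of the original fix) lists the installed providers in priority order; SUN should now come first:

import java.security.Provider;
import java.security.Security;

public class ProviderOrder {
    public static void main(String[] args) {
        // Security.getProviders() returns providers in priority order.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName());
        }
    }
}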
This also explains why there is no problem on Linux: different JCA provider implementations are in play there, since OracleUcrypto is specific to Solaris.
In garbage collected environments, holding on to unused references amounts to "failure to leak" and prevents the GC from doing its job: the GC can only reclaim objects that have become unreachable, so every retained reference blocks collection. It's really easy to accidentally keep references around.
A common culprit is hash tables. Another is arrays or vectors that are logically cleared (by resetting the use index to 0) but where the slots above the use index still reference live objects, as in the sketch below.
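To make the second case concrete, here is an illustrative stack (hypothetical code, bounds checks omitted for brevity): the leaky pop only moves the index down, so the slot above the new top keeps the popped object reachable; nulling out the slot fixes it:

import java.util.Arrays;

public class LeakyStack {
    private Object[] elements = new Object[16];
    private int size;

    public void push(Object e) {
        if (size == elements.length)
            elements = Arrays.copyOf(elements, 2 * size);
        elements[size++] = e;
    }

    // Leaky: elements[size] still references the popped object,
    // so the GC sees it as reachable and cannot reclaim it.
    public Object popLeaky() {
        return elements[--size];
    }

    // Fixed: drop the obsolete reference before returning.
    public Object pop() {
        Object result = elements[--size];
        elements[size] = null;
        return result;
    }
}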