I'm seeing a large discrepancy in memory usage for a Java process (running in Docker) between what the OS reports and what jcmd reports for the same JVM process. The JVM memory usage according to jcmd is quite stable, but the RSS reported by ps grows constantly, by roughly double the variance in the JVM's committed memory. The JVM variance itself comes from small code cache increases and small internal memory increases.
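For context, the jcmd numbers below come from Native Memory Tracking. A minimal sketch of how they can be collected (this assumes `-XX:NativeMemoryTracking=summary` was added at startup, since it is not among the JVM opts listed below; `<pid>` is a placeholder):

```
# NMT must be enabled when the JVM starts (adds a little overhead):
#   java -XX:NativeMemoryTracking=summary ...

# Take a baseline, then diff against it later to see which
# NMT category is growing
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```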
Questions:
- What can cause this variance, and how can I track it down to fix the root cause?
- Why is there such a large gap between the memory usage reported by ps (RSS) and the committed memory reported by jcmd?
The RSS reported by ps aux keeps increasing until the kernel's OOM killer steps in (we are using memory soft limits to delay the inevitable kill).
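Since the NMT committed total stays flat while RSS grows, the growth presumably lives in memory the JVM does not track. One way to watch it from outside the JVM is to diff the per-mapping RSS over time; a sketch (`<pid>` is a placeholder, and the cgroup path assumes cgroup v1, as on this ECS AMI):

```
# Snapshot the memory map twice and diff: growing anonymous regions
# that NMT does not account for are the suspects
pmap -x <pid> > pmap.before
sleep 600
pmap -x <pid> > pmap.after
diff pmap.before pmap.after

# Compare against what the container's cgroup sees
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
```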
For example:

Time point #1
- ps aux RSS: 2682980 KB
- JCMD Native Memory Tracking: Total: reserved=3625201KB, committed=2423489KB
Time point #2
- ps aux RSS: 2775140 KB
- JCMD Native Memory Tracking: Total: reserved=3627331KB, committed=2427371KB
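Between those two samples the deltas work out to: RSS grew by 2775140 − 2682980 = 92160 KB (~90 MB), while NMT committed grew by only 2427371 − 2423489 = 3882 KB and reserved by 3627331 − 3625201 = 2130 KB.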
Other details:
- Swap is disabled
- java.nio.BufferPool.MemoryUsed: 10.3 MB (see the sketch after this list for how this figure can be read in-process)
- JVM OPTS: -javaagent:/opt/newrelic/newrelic.jar -server -Xms1792m -Xmx1792m -XX:MetaspaceSize=128M -XX:MaxMetaspaceSize=192M -XX:+UseG1GC -XX:+UseStringDeduplication
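The BufferPool figure above is presumably read via the java.nio:type=BufferPool MXBeans, which cover direct and mapped buffers living outside the Java heap. A minimal sketch of an equivalent in-process check using the standard java.lang.management API (the class name BufferPools is mine):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPools {
    public static void main(String[] args) {
        // "direct" covers ByteBuffer.allocateDirect allocations,
        // "mapped" covers memory-mapped files -- both are off-heap
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: used=%d bytes, capacity=%d bytes, count=%d%n",
                    pool.getName(), pool.getMemoryUsed(),
                    pool.getTotalCapacity(), pool.getCount());
        }
    }
}
```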
Some versions:
- Linux: amzn-ami-xxx-amazon-ecs-optimized
- Docker version: 17.06.2-ce
- java version "1.8.0_121"
- Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
- Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
Source: https://stackoverflow.com/questions/47591343/java-process-memory-usage-deviation-from-jcmd-committed-memory