Find native memory leak in Java application using JeMalloc

囚心锁ツ · 2021-01-05 06:05

Currently I am trying to resolve a Java memory issue: my Java application keeps using more and more memory, and eventually it gets killed by the Linux OOM killer.

1 Answer
    执笔经年 · 2021-01-05 06:14

    Replacing the allocator (with jemalloc or tcmalloc, for instance) to profile memory usage may provide hints about the source of a native memory leak, but it is limited to the native code symbols available in the libraries loaded into the JVM.
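
    A minimal sketch of such an allocator swap, assuming jemalloc was built with --enable-prof; the library path, the application jar, and the profiling parameters are illustrative:

        # Preload jemalloc and enable its sampling heap profiler
        export LD_PRELOAD=/usr/local/lib/libjemalloc.so
        # Dump a profile roughly every 2^30 bytes allocated, sampling every 2^17 bytes
        export MALLOC_CONF=prof:true,lg_prof_interval:30,lg_prof_sample:17
        java -jar myapp.jar
        # Post-process the jeprof.*.heap dumps; only native symbols will resolve
        jeprof --show_bytes --svg "$(which java)" jeprof.*.heap > profile.svg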

    To get Java class/method names in the stack trace, you need to generate a mapping file that associates native code memory locations with their origin. The only tool for this at the time of writing is https://github.com/jvm-profiling-tools/perf-map-agent (see the sketch below).
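
    A rough sketch of building the agent and generating the mapping file, assuming JAVA_HOME is set and $PID holds the target JVM's process id:

        git clone https://github.com/jvm-profiling-tools/perf-map-agent
        cd perf-map-agent && cmake . && make
        # Attach to the running JVM and write /tmp/perf-<pid>.map
        bin/create-java-perf-map.sh "$PID"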

    To get more than just "interpreter" frames in the stack, the code of interest has to be JIT-compiled, so forcing compilation with -XX:CompileThreshold=1 on the JVM command line is worthwhile (except in production, IMO).
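
    For example (the -XX:+PreserveFramePointer flag is my assumption, not part of the original answer; it helps perf walk JIT-compiled stacks on JDK 8u60 and later, and myapp.jar is a placeholder):

        java -XX:CompileThreshold=1 -XX:+PreserveFramePointer -jar myapp.jar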

    Once the agent is loaded in the JVM, the mapping file is generated, and the code is JIT-compiled, perf can be used for CPU profiling. Memory leak investigation requires more processing.
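
    A typical CPU-profiling session might then look like this (sampling frequency and duration are arbitrary choices):

        # Sample stacks in the JVM at 99 Hz for 30 seconds, then inspect
        perf record -F 99 -g -p "$PID" -- sleep 30
        perf report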

    The best option, if your Linux kernel is 4.9 or later, is to get bcc and its memleak tool: https://github.com/iovisor/bcc/blob/master/tools/memleak_example.txt
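
    memleak periodically prints the stack traces of allocations that have not been freed; one possible invocation (the install path varies by distribution, and on Debian/Ubuntu the tool may be named memleak-bpfcc):

        # Report outstanding (possibly leaked) allocations in $PID every 10 seconds
        sudo /usr/share/bcc/tools/memleak -p "$PID" 10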

    Many thanks to Brendan Gregg

    A Debian system is ready after a simple apt install bcc, but a RedHat system requires more work, as documented for CentOS 7 at http://hydandata.org/installing-ebpf-tools-bcc-and-ply-on-centos-7 (it is even worse on CentOS 6).

    As an alternative, perf alone can also report leak stack traces with specific probes. Scripts and example usage are available at https://github.com/dkogan/memory_leak_instrumentation, but they have to be adapted to the Java context.
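
    The underlying idea is to place dynamic probes on the allocator entry points and record their call stacks; a rough sketch against glibc (the libc path is distribution-specific, and matching mallocs to frees is what the linked scripts handle):

        # Create probes on malloc and free in libc
        sudo perf probe -x /lib/x86_64-linux-gnu/libc.so.6 malloc
        sudo perf probe -x /lib/x86_64-linux-gnu/libc.so.6 free
        # Record call stacks for both probes in the target JVM
        sudo perf record -e probe_libc:malloc -e probe_libc:free -g -p "$PID" -- sleep 30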
