I am having an issue with a JNI program randomly running out of memory.
This is a 32-bit Java program which reads a file, does some image processing, typically using
My own approach to this problem is simply to call System.gc(), but from inside the native code:
#include <jni.h>
// ...
void my_native_function(JNIEnv* env, jobject obj) {
    // Take out the trash: invoke java.lang.System.gc() from native code.
    jclass systemClass = env->FindClass("java/lang/System");
    if (systemClass == nullptr) {
        return; // ClassNotFoundException is pending
    }
    jmethodID systemGCMethod = env->GetStaticMethodID(systemClass, "gc", "()V");
    if (systemGCMethod == nullptr) {
        env->DeleteLocalRef(systemClass);
        return; // NoSuchMethodError is pending
    }
    env->CallStaticVoidMethod(systemClass, systemGCMethod);
    env->DeleteLocalRef(systemClass);
}
I hope this works for you, too.
FWIW (and I realize this is kind of heresy) adding a call to
System.gc();
before the first JNI call for each file made a dramatic improvement to this situation. Instead of getting memory errors on 20% of the files, it is now less than 5%. Even better, the errors are no longer random but are repeatable from run to run, so presumably they can be tracked down.
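A minimal sketch of that workaround, assuming a per-file processing loop (the class name and the processFileNative stand-in are placeholders, not from the original post):

```java
import java.util.List;

public class ImageBatch {
    // Stand-in for the real JNI entry point; the name is hypothetical.
    static void processFileNative(String path) {
        // native image processing would happen here
    }

    // Process each file, asking for a GC before every native call so the
    // managed heap is as compact as possible before JNI allocations start.
    static int processAll(List<String> paths) {
        int processed = 0;
        for (String path : paths) {
            System.gc();
            processFileNative(path);
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(processAll(List.of("a.img", "b.img")));
    }
}
```

Note that System.gc() is only a hint to the collector, which matches the "repeatable rather than cured" behaviour described above.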
The following assumes you're using the HotSpot JVM.
32-bit processes are not just constrained by committed memory; far more importantly, they're constrained by virtual memory, i.e. reserved address space. A 32-bit process only has 4GB worth of addresses in total, and in practice only 2-3GB of that is usable by user code.
The JVM will reserve a fixed, possibly large amount of address space for the managed heap up front, then dynamically allocates some internal structures on top of that amount and then possibly even more for DirectByteBuffers or memory-mapped files. This can leave very little room for native code to run.
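As a small illustration of that last point (not from the answer itself): direct buffers are backed by native memory outside the -Xmx heap, so they compete with JNI code for the same limited address space.

```java
import java.nio.ByteBuffer;

public class DirectDemo {
    public static void main(String[] args) {
        // allocateDirect reserves native memory outside the managed heap;
        // on a 32-bit JVM this eats into the address space JNI code needs.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64MB
        System.out.println(buf.isDirect());  // backed by native memory
        System.out.println(buf.capacity());
    }
}
```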
Use Native Memory Tracking (start the JVM with -XX:NativeMemoryTracking=summary, then run jcmd <pid> VM.native_memory summary) to determine how much memory the various parts of the JVM are using, and pmap <pid>
to check for memory-mapped files. Then try to limit that usage without hampering your application.
Alternatively you could spawn a new process and do the image processing there.
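A hedged sketch of that alternative, using ProcessBuilder to run the image processing in a separate JVM with its own fresh address space (the worker main class and its arguments are placeholders):

```java
import java.io.IOException;

public class Spawner {
    // Launch a separate JVM for the image-processing step so its native
    // allocations get their own address space instead of sharing ours.
    static int runWorker(String mainClass, String inputFile)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-Xmx256m",    // keep the worker's heap small
                mainClass, inputFile); // hypothetical worker entry point
        pb.inheritIO();                // forward the worker's output
        Process p = pb.start();
        return p.waitFor();            // exit code 0 means success
    }
}
```

The parent then only pays the cost of the worker's exit status, and a crash or out-of-memory condition in the native code kills the worker, not the main program.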