Why are my Gradle builds dying with exit-code 137?

醉酒成梦 2020-12-07 23:48

I've been trying to compile and test a large project using Gradle. The tests run fine until they die unexpectedly. I dug around, and various resources said this is due to a memory issue.

9 Answers
  • 2020-12-08 00:47

    This issue seems to be related to Linux rather than Gradle as stated in the Jenkins docs:

    In cases where virtual memory is running short the kernel OOM (Out of Memory) killer may forcibly kill Jenkins or individual builds. If this occurs on Linux you may see builds terminate with exit code 137 (128 + signal number for SIGKILL). The dmesg command output will show log messages that will confirm the action that the kernel took.

    https://wiki.jenkins-ci.org/display/JENKINS/I'm+getting+OutOfMemoryError
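    To confirm the OOM killer was really responsible, you can decode the exit code and search the kernel log as the Jenkins docs suggest (a quick sketch; `dmesg` may require root on some systems):

    ```shell
    # 137 = 128 + signal number; signal 9 is SIGKILL
    echo $((128 + 9))

    # Search the kernel log for OOM-killer activity (may need sudo)
    dmesg | grep -iE 'killed process|out of memory' || true
    ```

    If the build was OOM-killed, the grep will typically show a line naming the killed Java process and the memory totals at the time.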

  • 2020-12-08 00:48

    I had a similar issue on a DigitalOcean server: my Gradle build failed completely at the test stage, with a very similar stack trace and without a single test being executed.

    The Gradle docs state that the Gradle daemon should not be run in CI environments. So I just added --no-daemon to my build command and everything worked. Stopping the daemon with ./gradlew --stop was also useful, but it only helped for a single build - the next one failed again.

    My build command:

    ./gradlew build --no-daemon
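    Alternatively, the daemon can be disabled for every build on the CI machine through a `gradle.properties` file instead of a per-invocation flag (a sketch; the file goes in the project root or in `GRADLE_USER_HOME`):

    ```properties
    # gradle.properties - disable the daemon so the JVM exits after each build
    org.gradle.daemon=false
    ```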
    
  • 2020-12-08 00:48

    I've also been having the same problem on CircleCI, but I didn't have any luck with any of the above. This is what I found:

    • Adding -Dorg.gradle.daemon=false to my CircleCI config.yml stopped the daemon from being used, but didn't fix the problem.
    • Adding -Dorg.gradle.workers.max=2 to GRADLE_OPTS, or --max-workers 2 to the Gradle command, didn't seem to have much effect from what I could see. I tried --max-workers=2 as well, because both formats seem to be floating around on Google. I connected to my CircleCI container, and in top I could still see 3-4 Java processes forking off, so I'm not sure this is doing anything.
    • I also tried max workers = 1 in the combinations above.
    • Tried -XX:+UnlockExperimentalVMOptions and -XX:+UseCGroupMemoryLimitForHeap both in the JVM args and in the test {} configuration inside my build, as suggested by Baptiste Mesta. I don't see how this could work; I would have thought the multiple forked processes don't know what proportion of the container's memory the other processes are using. Unless I'm misunderstanding it.

    In the end, I fixed it just by being nice and explicit with the memory settings, rather than using magic:

    • Circle CI config: GRADLE_OPTS: -Xmx2048m -Dorg.gradle.daemon=false
    • Gradle build: test { maxHeapSize = "512m" }

    Edit: You may need to go lower, depending on whether you have other processes running.
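    Putting those two settings together, a build.gradle sketch for a memory-constrained CI container might look like this (the numbers are illustrative and should be sized to your container):

    ```groovy
    // build.gradle - cap the memory of each forked test JVM explicitly
    test {
        maxHeapSize = "512m"    // heap ceiling per forked test JVM
        maxParallelForks = 1    // fewer concurrent forks -> lower peak memory
    }
    ```

    The point is that the sum of all JVM heaps (Gradle itself plus every test fork) must fit inside the container's memory limit, which is why relying on default or "automatic" sizing tends to trip the OOM killer.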
