Hadoop Yarn Container Does Not Allocate Enough Space

Asked by 说谎 on 2020-12-30 09:27

I'm running a Hadoop job, and in my yarn-site.xml file I have the following configuration:

    
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        ...
    </property>

2 Answers
  一整个雨季 · 2020-12-30 09:58

    You should also properly configure the memory allocations for MapReduce. From this HortonWorks tutorial:

    [...]

    For our example cluster, we have the minimum RAM for a Container (yarn.scheduler.minimum-allocation-mb) = 2 GB. We’ll thus assign 4 GB for Map task Containers, and 8 GB for Reduce tasks Containers.
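
    In yarn-site.xml, that minimum allocation would be expressed along these lines (a sketch assuming the 2 GB example value above):

        <property>
            <name>yarn.scheduler.minimum-allocation-mb</name>
            <value>2048</value>
        </property>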

    In mapred-site.xml:

    mapreduce.map.memory.mb: 4096

    mapreduce.reduce.memory.mb: 8192

    Each Container will run JVMs for the Map and Reduce tasks. The JVM heap size should be set to lower than the Map and Reduce memory defined above, so that they are within the bounds of the Container memory allocated by YARN.

    In mapred-site.xml:

    mapreduce.map.java.opts: -Xmx3072m

    mapreduce.reduce.java.opts: -Xmx6144m

    The above settings configure the upper limit of the physical RAM that Map and Reduce tasks will use; note that each heap (-Xmx) is set to 75% of its container's memory (3072 of 4096 MB, 6144 of 8192 MB), leaving headroom for non-heap JVM overhead.
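
    Put together, the four settings above would look something like this in mapred-site.xml (a sketch using the same example values):

        <property>
            <name>mapreduce.map.memory.mb</name>
            <value>4096</value>
        </property>
        <property>
            <name>mapreduce.reduce.memory.mb</name>
            <value>8192</value>
        </property>
        <property>
            <name>mapreduce.map.java.opts</name>
            <value>-Xmx3072m</value>
        </property>
        <property>
            <name>mapreduce.reduce.java.opts</name>
            <value>-Xmx6144m</value>
        </property>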

    Finally, someone in this Hadoop mailing-list thread had the same problem, and in their case it turned out to be a memory leak in their code.
