How to set the VCORES in hadoop mapreduce/yarn?

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-21 02:42:27

Question


The following is my configuration:

**mapred-site.xml**
mapreduce.map.memory.mb: 4096, mapreduce.map.java.opts: -Xmx3072m
mapreduce.reduce.memory.mb: 8192, mapreduce.reduce.java.opts: -Xmx6144m

**yarn-site.xml**
yarn.nodemanager.resource.memory-mb: 40 GB
yarn.scheduler.minimum-allocation-mb: 1 GB

The VCores value displayed for my Hadoop cluster is 8, but I don't know how that number is computed or where to configure it.

I hope someone can help me.


Answer 1:


Short Answer

It most probably doesn't matter if you are just running Hadoop out of the box on your single-node cluster, or even a small personal distributed cluster. You just need to worry about memory.

Long Answer

vCores are used on larger clusters in order to limit CPU for different users or applications. If you are using YARN for yourself, there is no real reason to limit your container CPU. That is why vCores are not even taken into consideration by default in Hadoop!

Try setting your available NodeManager vcores to 1. It doesn't matter! Your number of containers will still be 2 or 4, or whatever the value of:

yarn.nodemanager.resource.memory-mb / mapreduce.[map|reduce].memory.mb
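
For example, plugging in the asker's numbers (taking 40 GB as 40960 MB):

40960 MB / 4096 MB per map container    = 10 concurrent map containers per node
40960 MB / 8192 MB per reduce container =  5 concurrent reduce containers per node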

If you really do want the number of containers to take vCores into consideration and be limited by:

yarn.nodemanager.resource.cpu-vcores / mapreduce.[map|reduce].cpu.vcores

then you need to use a different resource calculator. Go to your capacity-scheduler.xml config and change DefaultResourceCalculator to DominantResourceCalculator.
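
The change is a one-property edit; a minimal sketch (both class names are the standard ones shipped with Hadoop, but verify them against your version's capacity-scheduler.xml):

<!-- $HADOOP_CONF_DIR/capacity-scheduler.xml -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <!-- The default, org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator,
       looks at memory only; DominantResourceCalculator considers memory and vcores. -->
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>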

In addition to using vCores for container allocation, do you want vCores to really limit the CPU usage of each node? Then you need to change even more configurations to use the LinuxContainerExecutor instead of the DefaultContainerExecutor, because it can manage Linux cgroups, which are used to limit CPU resources. Follow this page if you want more info on this.
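
A rough sketch of the yarn-site.xml side of that change (the resources-handler class below is the Hadoop 2.x cgroups handler and the hierarchy value is an illustrative assumption; LinuxContainerExecutor additionally needs the setuid container-executor binary configured, which is not shown here):

<!-- $HADOOP_CONF_DIR/yarn-site.xml -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <!-- Hadoop 2.x handler that places containers into Linux cgroups -->
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <!-- Illustrative assumption: the cgroup hierarchy YARN creates container groups under -->
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/hadoop-yarn</value>
</property>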




Answer 2:


yarn.nodemanager.resource.cpu-vcores - the number of CPU cores that can be allocated for containers on a node.

mapreduce.map.cpu.vcores - the number of virtual CPU cores allocated for each map task of a job.

mapreduce.reduce.cpu.vcores - the number of virtual CPU cores allocated for each reduce task of a job.
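
As an illustration only (the values are assumptions, not recommendations), the per-task settings could be fixed cluster-wide in mapred-site.xml:

<!-- $HADOOP_CONF_DIR/mapred-site.xml: illustrative values -->
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>1</value> <!-- each map task asks for 1 vcore -->
</property>
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>2</value> <!-- each reduce task asks for 2 vcores -->
</property>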




Answer 3:


I accidentally came across this question and I eventually managed to find the answers that I needed, so I will try to provide a complete answer.

Entities and their relations

For each Hadoop application/job, you have an ApplicationMaster that communicates with the ResourceManager about the resources available on the cluster. The ResourceManager receives information about the available resources on each node from each NodeManager. The allocated resources are called Containers (memory and CPU). For more information, see this.

Resource declaration on the cluster

Each NodeManager provides information about its available resources. The relevant settings are yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores in $HADOOP_CONF_DIR/yarn-site.xml. They declare the memory and CPUs that can be allocated to Containers.
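
For instance, a NodeManager matching the asker's 40 GB setup might declare (the vcore count of 8 is an assumption about the node's hardware):

<!-- $HADOOP_CONF_DIR/yarn-site.xml -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>40960</value> <!-- memory this node offers to Containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value> <!-- vcores this node offers to Containers -->
</property>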

Ask for resources

For your jobs, you can configure what resources are needed by each map/reduce task. This can be done as follows (this is for the map tasks):

conf.set("mapreduce.map.cpu.vcores", "4");
conf.set("mapreduce.map.memory.mb", "2048");

This asks for 4 virtual cores and 2048 MB of memory for each map task.

You can also configure the resources that are necessary for the Application Master the same way with the properties yarn.app.mapreduce.am.resource.mb and yarn.app.mapreduce.am.resource.cpu-vcores.
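
In the same way, with values that are again illustrative assumptions only:

conf.set("yarn.app.mapreduce.am.resource.mb", "2048");      // memory for the ApplicationMaster container
conf.set("yarn.app.mapreduce.am.resource.cpu-vcores", "1"); // vcores for the ApplicationMaster container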

Default values for these properties are defined in mapred-default.xml (bundled with the Hadoop distribution); cluster-wide overrides belong in $HADOOP_CONF_DIR/mapred-site.xml.

For more options and default values, I would recommend that you take a look at this and this.



Source: https://stackoverflow.com/questions/26522967/how-to-set-the-vcores-in-hadoop-mapreduce-yarn
