No matter how much I tinker with the settings in yarn-site.xml, i.e. using all of the options below:
yarn.scheduler.minimum-allocation-vcores
yarn.nodemanager.resource.memory-mb
yarn.nodemanager.resource.cpu-vcores
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.maximum-allocation-vcores
I still cannot get my application, i.e. Spark, to utilize all of the cores on the cluster. The Spark executors seem to be taking up all of the available memory correctly, but each executor keeps taking only a single core and no more.
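For context, these are set in yarn-site.xml roughly as follows (the values here are only illustrative, not the exact ones from my cluster):

<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>16384</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>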
Here are the options configured in spark-defaults.conf
spark.executor.cores 3
spark.executor.memory 5100m
spark.yarn.executor.memoryOverhead 800
spark.driver.memory 2g
spark.yarn.driver.memoryOverhead 400
spark.executor.instances 28
spark.reducer.maxMbInFlight 120
spark.shuffle.file.buffer.kb 200
Notice that spark.executor.cores
is set to 3, but it doesn't take effect.
How do I fix this?
The problem lies not with yarn-site.xml
or spark-defaults.conf
but with the resource calculator that assigns cores to the executors (or, in the case of MapReduce jobs, to the mappers/reducers).
The default resource calculator, org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator,
uses only memory information when allocating containers, so CPU scheduling is effectively not enabled by default. To take both memory and CPU into account, the resource calculator needs to be changed to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
in the capacity-scheduler.xml
file.
Here's what needs to change.
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
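After editing capacity-scheduler.xml, the scheduler configuration can usually be reloaded with the command below, although depending on your distribution a full ResourceManager restart may still be required:

yarn rmadmin -refreshQueues

Once the change is in effect, the vcore count requested per container (e.g. 3 for the executors above) should show up in the ResourceManager web UI instead of a constant 1.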
I had a similar kind of issue: in my code I was setting spark.executor.cores to 5,
yet each executor was only taking 1 core, which is the default. In the Spark UI, under the Environment tab, I could see 5 cores, but when checking the Executors tab I could only see 1 task in the RUNNING state per executor.
I was using Spark version 1.6.3.
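In code that meant roughly the following (a sketch in Scala; the application name is just a placeholder):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("MyApp")                 // placeholder name
  .set("spark.executor.cores", "5")    // this setting was not being honored
val sc = new SparkContext(conf)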
So I then passed the setting on the spark-submit command line instead, as
--conf spark.executor.cores=5
which works fine and uses 5 cores, or simply
--executor-cores 5
which also works.
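For example, a full spark-submit invocation along these lines (the master, memory settings, and application jar are placeholders for your own values):

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --executor-cores 5 \
  --executor-memory 5g \
  --num-executors 28 \
  your-application.jar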
Source: https://stackoverflow.com/questions/29964792/apache-hadoop-yarn-underutilization-of-cores