So I have a Spark standalone server with 16 cores and 64 GB of RAM. Both the master and the worker are running on that server. I don't have dynamic allocation enabled. I am on Spark 2.0.
What I don't understand is this: when I submit my job and specify
--num-executors 2
--executor-cores 2
only 4 cores should be taken up. Yet when the job is submitted, it takes all 16 cores and spins up 8 executors regardless, bypassing the num-executors parameter. But if I change the executor-cores parameter to 4, it adjusts accordingly and 4 executors spin up.
Disclaimer: I really don't know whether --num-executors should work or not in standalone mode. I haven't seen it used outside YARN.
Note: As pointed out by Marco, --num-executors is no longer in use on YARN.
You can effectively control the number of executors in standalone mode with static allocation (this works on Mesos as well) by combining spark.cores.max and spark.executor.cores, where the number of executors is determined as:
floor(spark.cores.max / spark.executor.cores)
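This also matches the behavior you observed: with spark.cores.max left unset, a standalone application is offered all 16 cores of the worker, so floor(16 / 2) = 8 executors with --executor-cores 2, and floor(16 / 4) = 4 executors with --executor-cores 4.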
For example:
--conf "spark.cores.max=4" --conf "spark.executor.cores=2"
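Put together, a full submission might look roughly like the sketch below; the master URL, main class, and jar path are placeholders, not taken from the original post:
# spark.cores.max=4 with spark.executor.cores=2 -> floor(4 / 2) = 2 executors
# com.example.MyApp, master-host, and the jar path are hypothetical placeholders
spark-submit \
  --master spark://master-host:7077 \
  --class com.example.MyApp \
  --conf "spark.cores.max=4" \
  --conf "spark.executor.cores=2" \
  /path/to/my-app.jar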
Source: https://stackoverflow.com/questions/39399205/spark-standalone-number-executors-cores-control