Question
I am using Spark on Hadoop and want to know how Spark allocates the virtual memory to executor.
As per YARN's vmem-pmem ratio (yarn.nodemanager.vmem-pmem-ratio, default 2.1), the container is allowed 2.1 times its physical memory as virtual memory.
Hence, if Xmx is 1 GB, then 1 GB * 2.1 = 2.1 GB of virtual memory is allowed for the container.
How does this work with Spark? And is the statement below correct?
If I set executor memory = 1 GB, then:
Total virtual memory = 1 GB * 2.1 * spark.yarn.executor.memoryOverhead. Is this true?
If not, then how is virtual memory for an executor calculated in Spark?
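(For illustration only, a minimal sketch of the arithmetic described above, assuming YARN's default yarn.nodemanager.vmem-pmem-ratio of 2.1; the 1 GB figure is just the example value from the question.)

    # Sketch of the plain YARN vmem check described above (assumed defaults, not Spark-specific)
    vmem_pmem_ratio = 2.1      # yarn.nodemanager.vmem-pmem-ratio, YARN's default
    container_pmem_gb = 1.0    # physical memory granted to the container (the 1 GB example)
    print(container_pmem_gb * vmem_pmem_ratio)  # 2.1 GB of virtual memory allowed before the NodeManager kills the container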
Answer 1:
For Spark executor resources, yarn-client and yarn-cluster modes use the same configurations:
In spark-defaults.conf, spark.executor.memory is set to 2 GB.
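For reference, the corresponding entries in spark-defaults.conf would look roughly like the following; the values are only the example figures from this answer, and the overhead line is optional since Spark derives a default when it is omitted.

    # spark-defaults.conf (illustrative values, not recommendations)
    spark.executor.memory               2g
    spark.yarn.executor.memoryOverhead  384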
I got this from: Resource Allocation Configuration for Spark on YARN
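As a hedged sketch of the usual Spark-on-YARN accounting (assuming spark.yarn.executor.memoryOverhead defaults to max(384 MB, 10% of executor memory) and yarn.nodemanager.vmem-pmem-ratio defaults to 2.1): the overhead is added to spark.executor.memory to size the container request, and the vmem-pmem ratio is then applied to that container size, rather than the overhead acting as a third multiplier as written in the question.

    # Sketch only: the names mirror Spark/YARN settings, but the defaults and the
    # arithmetic here are assumptions about typical Spark 1.x/2.x behaviour.
    def executor_vmem_limit_mb(executor_memory_mb, memory_overhead_mb=None, vmem_pmem_ratio=2.1):
        if memory_overhead_mb is None:
            # spark.yarn.executor.memoryOverhead default: max(384 MB, 10% of executor memory)
            memory_overhead_mb = max(384, int(executor_memory_mb * 0.10))
        container_pmem_mb = executor_memory_mb + memory_overhead_mb  # container size requested from YARN
        return container_pmem_mb * vmem_pmem_ratio                   # vmem ceiling enforced by the NodeManager

    print(executor_vmem_limit_mb(1024))  # 1 GB executor -> (1024 + 384) * 2.1 = 2956.8 MB

Under these assumptions, the 1 GB example would be allowed roughly 2.9 GB of virtual memory, not 1 GB * 2.1 * overhead.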
Source: https://stackoverflow.com/questions/40355716/how-is-virtual-memory-calculated-in-spark