How is virtual memory calculated in Spark?


Question


I am using Spark on Hadoop and want to know how Spark allocates virtual memory to executors.

As per YARN's vmem-pmem ratio (yarn.nodemanager.vmem-pmem-ratio, default 2.1), a container is allowed 2.1 times its physical memory as virtual memory.

Hence, if Xmx is 1 GB, then 1 GB * 2.1 = 2.1 GB of virtual memory is allowed for the container.
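For concreteness, here is a minimal Python sketch of that check, assuming the default yarn.nodemanager.vmem-pmem-ratio of 2.1 (the function name is illustrative, not a YARN API):

```python
# yarn.nodemanager.vmem-pmem-ratio (default 2.1): a container may use up to
# 2.1 units of virtual memory per unit of physical memory it was granted.
VMEM_PMEM_RATIO = 2.1

def vmem_limit_mb(physical_mb: float, ratio: float = VMEM_PMEM_RATIO) -> float:
    """Virtual-memory ceiling YARN enforces for a container."""
    return physical_mb * ratio

print(vmem_limit_mb(1024))  # 1 GB physical -> 2150.4 MB virtual allowed
```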

How does this work in Spark, and is the statement below correct?

If I set executor memory = 1 GB, then:

Total virtual memory = 1 GB * 2.1 * spark.yarn.executor.memoryOverhead

Is this true?

If not, then how is virtual memory for an executor calculated in Spark?


Answer 1:


For Spark executor resources, yarn-client and yarn-cluster modes use the same configurations:

In spark-defaults.conf, spark.executor.memory is set to 2 GB.

I got this from: Resource Allocation Configuration for Spark on YARN
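To make the arithmetic concrete, below is a hedged Python sketch of how the executor container is typically sized on YARN, assuming the documented defaults of that era: spark.yarn.executor.memoryOverhead = max(384 MB, 10% of executor memory), and yarn.nodemanager.vmem-pmem-ratio = 2.1. Note the overhead is added to the executor memory, and the vmem-pmem ratio then multiplies the whole container request; the overhead is not itself a multiplier, as the formula in the question suggests. All names below are illustrative, not Spark internals:

```python
# Assumed defaults (Spark 1.x/2.x docs): 384 MB overhead floor, 10% fraction.
MIN_OVERHEAD_MB = 384
OVERHEAD_FRACTION = 0.10
VMEM_PMEM_RATIO = 2.1  # yarn.nodemanager.vmem-pmem-ratio default

def executor_container_mb(executor_memory_mb: int) -> tuple[int, float]:
    """Return (physical request, virtual ceiling) for one executor, in MB."""
    overhead = max(MIN_OVERHEAD_MB, int(executor_memory_mb * OVERHEAD_FRACTION))
    physical = executor_memory_mb + overhead  # what Spark asks YARN for
    virtual = physical * VMEM_PMEM_RATIO      # what YARN's vmem check permits
    return physical, virtual

# spark.executor.memory = 1g -> 1024 + 384 = 1408 MB physical request,
# and 1408 * 2.1 = 2956.8 MB virtual ceiling (before YARN rounds the
# request up to its minimum-allocation increment).
print(executor_container_mb(1024))
```

So under these assumptions, a 1 GB executor gets a virtual-memory ceiling of roughly 2.9 GB, not 1 GB * 2.1 * memoryOverhead.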



Source: https://stackoverflow.com/questions/40355716/how-is-virtual-memory-calculated-in-spark
