Question
Am I understanding the documentation for client mode correctly?
- Is client mode opposed to cluster mode, where the driver runs within the application master?
- In client mode, the driver and application master are separate processes, so must spark.driver.memory + spark.yarn.am.memory be less than the machine's memory?
- In client mode, is the driver memory not included in the application master memory setting?
Answer 1:
Is client mode opposed to cluster mode, where the driver runs within the application master?
Yes. When a Spark application is deployed on YARN:
- In client mode, the driver runs on the machine from which the application was submitted, and that machine has to stay reachable on the network until the application completes.
- In cluster mode, the driver runs inside the application master (one per Spark application) on a cluster node, and the machine submitting the application does not need to stay on the network after submission.
(Diagrams: client mode and cluster mode deployments on YARN.)
If the Spark application is submitted in cluster mode to Spark's own resource manager (standalone), the driver process runs on one of the worker nodes.
References for images and content:
- StackOverflow - Spark on yarn concept understanding
- Cloudera Blog - Apache Spark Resource Management and YARN App Models
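A minimal sketch of how this looks from application code (the app name and master URL are assumptions for illustration). Launching this program directly from an edge node gives client mode, so the driver is the launching JVM itself; cluster mode is normally requested at submit time with spark-submit --deploy-mode cluster, in which case the same driver code runs inside the application master on a cluster node.

```scala
import org.apache.spark.sql.SparkSession

object DeployModeSketch {
  def main(args: Array[String]): Unit = {
    // Running this directly against YARN from the submitting machine yields
    // client mode: the driver is this JVM, and a separate, lightweight
    // Application Master container is started on a cluster node.
    val spark = SparkSession.builder()
      .appName("deploy-mode-sketch") // hypothetical app name
      .master("yarn")
      .getOrCreate()

    // Reports "client" or "cluster" depending on how the job was launched.
    println(s"deployMode = ${spark.sparkContext.deployMode}")
    spark.stop()
  }
}
```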
In client mode the driver and application master are separate processes and therefore spark.driver.memory + spark.yarn.am.memory must be less than the machine's memory?
No. In client mode, the driver and the AM are separate processes running on different machines, so their memory does not need to be combined. However, spark.yarn.am.memory plus some overhead must be less than the memory YARN can allocate to a container (yarn.nodemanager.resource.memory-mb); if the container exceeds its allocation, YARN will kill it.
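As a rough illustration of that budget, a small check might look like the sketch below; the 10% overhead factor and 384 MiB floor are taken from the documented default of spark.yarn.am.memoryOverhead, and the NodeManager capacity is a made-up value.

```scala
object AmMemoryBudget {
  // Assumed defaults: AM overhead = max(AM memory * 0.10, 384 MiB).
  val OverheadFactor = 0.10
  val OverheadMinMiB = 384L

  // Total container size YARN is asked for on behalf of the AM.
  def amContainerMiB(amMemoryMiB: Long): Long =
    amMemoryMiB + math.max((amMemoryMiB * OverheadFactor).toLong, OverheadMinMiB)

  def main(args: Array[String]): Unit = {
    val amMemoryMiB    = 512L  // spark.yarn.am.memory (client-mode default)
    val nodeManagerMiB = 8192L // yarn.nodemanager.resource.memory-mb (hypothetical)

    val requested = amContainerMiB(amMemoryMiB)
    println(s"AM container request: $requested MiB of $nodeManagerMiB MiB available")
    require(requested <= nodeManagerMiB,
      "AM container would not fit in a NodeManager's memory allotment")
  }
}
```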
In client mode, is the driver memory not included in the application master memory setting?
Here spark.driver.memory must be less than the available memory on the machine from which the Spark application is launched. But in cluster mode, use spark.driver.memory instead of spark.yarn.am.memory.
spark.yarn.am.memory (default: 512m)
Amount of memory to use for the YARN Application Master in client mode, in the same format as JVM memory strings (e.g. 512m, 2g). In cluster mode, use spark.driver.memory instead. Use lower-case suffixes, e.g. k, m, g, t, and p, for kibi-, mebi-, gibi-, tebi-, and pebibytes, respectively.
See the Spark "Running on YARN" configuration documentation for more about these properties.
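A minimal client-mode sketch of where each setting belongs (memory values and app name are illustrative assumptions):

```scala
import org.apache.spark.sql.SparkSession

object ClientModeMemorySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("client-mode-memory-sketch") // hypothetical app name
      .master("yarn")
      // Picked up when the AM container is requested at SparkContext start,
      // so setting it here should work in client mode.
      .config("spark.yarn.am.memory", "1g")
      // NOTE: spark.driver.memory cannot be raised from application code in
      // client mode -- the driver JVM is already running. Set it up front,
      // e.g. spark-submit --driver-memory 4g, or in spark-defaults.conf.
      .getOrCreate()

    spark.stop()
  }
}
```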
Answer 2:
In client mode, the driver is launched directly within the spark-submit process, i.e. the client program, while the application master is created on one of the nodes in the cluster. spark.driver.memory (plus memory overhead) must be less than the client machine's memory.
In cluster mode, the driver runs inside the application master on one of the nodes in the cluster.
https://blog.cloudera.com/blog/2014/05/apache-spark-resource-management-and-yarn-app-models/
Source: https://stackoverflow.com/questions/50402020/spark-driver-memory-and-application-master-memory