What is the relationship between workers, worker instances, and executors?

Posted by 北城以北 on 2019-11-28 03:03:42

I suggest reading the Spark cluster docs first, but even more so this Cloudera blog post explaining these modes.

Your first question depends on what you mean by 'instances'. A node is a machine, and there's not a good reason to run more than one worker per machine. So two worker nodes typically means two machines, each a Spark worker.

Workers hold many executors, for many applications. One application has executors on many workers.

Your third question is not clear.

mrsrinivas

Extending the other great answers, I would like to describe this with a few images.

In Spark Standalone mode, there are master nodes and worker nodes.

This is how the master and the workers fit together in standalone mode.

If you are curious about how Spark works with YARN, check this post: Spark on YARN.

1. Do 2 worker instances mean one worker node with 2 worker processes?

In general, a worker instance is also called a slave, as it is a process that executes Spark tasks/jobs. The suggested mapping between a node (a physical or virtual machine) and a worker is:

1 Node = 1 Worker process

2. Does every worker instance hold an executor for a specific application (which manages storage and tasks), or does one worker node hold one executor?

Yes. A worker node can hold multiple executors (processes) if it has sufficient CPU, memory, and storage.

Check the Worker node in the given image.

BTW, the number of executors on a worker node at a given point in time depends entirely on the workload on the cluster and the node's capacity to run executors.
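As a hedged illustration of that capacity arithmetic (the master URL, class, jar, and resource numbers below are made up, not from the original answer), the per-executor resources requested at submit time determine how many executors a single worker can launch:

    # Ask for 2 cores and 4g of memory per executor (illustrative numbers).
    # A worker offering 8 cores and 16g of memory can then host up to
    # min(8 / 2, 16 / 4) = 4 executors for this application, workload permitting.
    spark-submit \
      --master spark://master-host:7077 \
      --executor-cores 2 \
      --executor-memory 4g \
      --class com.example.MyApp \
      myapp.jar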

3. Is there a flow chart explaining the Spark runtime?

Let's look at the execution from Spark's perspective, over any resource manager, for a program which joins two RDDs, applies a reduce operation, and then filters.
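A minimal sketch of such a program (the data, names, and threshold below are made up for illustration, not taken from the original post):

    import org.apache.spark.sql.SparkSession

    object JoinReduceFilter {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("JoinReduceFilter").getOrCreate()
        val sc = spark.sparkContext

        // (customerId, amount) pairs
        val orders = sc.parallelize(Seq((1, 100.0), (2, 50.0), (1, 25.0)))
        // (customerId, name) pairs
        val customers = sc.parallelize(Seq((1, "alice"), (2, "bob")))

        val result = orders
          .join(customers)                                  // (id, (amount, name))
          .map { case (id, (amount, name)) => (name, amount) }
          .reduceByKey(_ + _)                               // total amount per customer
          .filter { case (_, total) => total > 60.0 }       // keep only large totals

        result.collect().foreach(println)
        spark.stop()
      }
    }

The join and reduceByKey steps each introduce a shuffle, which is where the stages split when the driver turns this program into tasks that run inside the executors.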

HTH

I know this is an old question and Sean's answer was excellent. My writeup is about SPARK_WORKER_INSTANCES, raised in MrQuestion's comment. If you use Mesos or YARN as your cluster manager, you are able to run multiple executors on the same machine with one worker, so there is really no need to run multiple workers per machine. However, if you use the standalone cluster manager, it currently still allows only one executor per worker process on each physical machine. So if you have a very large machine and would like to run multiple executors on it, you have to start more than one worker process. That is what SPARK_WORKER_INSTANCES in spark-env.sh is for. The default value is 1. If you do use this setting, make sure you set SPARK_WORKER_CORES explicitly to limit the cores per worker, or else each worker will try to use all the cores.
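As a concrete sketch of that setting (the numbers are illustrative, and SPARK_WORKER_MEMORY is added here as an assumption of how you would typically size each worker, not something from the original answer), spark-env.sh on the large machine might look like this:

    # conf/spark-env.sh on the large machine (illustrative values)

    # Start 4 worker JVMs on this host instead of the default 1.
    export SPARK_WORKER_INSTANCES=4

    # Limit each worker to 8 cores; without this, each worker
    # would try to claim all of the machine's cores.
    export SPARK_WORKER_CORES=8

    # Memory each worker can hand out to executors (assumed sizing).
    export SPARK_WORKER_MEMORY=24g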

This standalone cluster manager limitation should go away soon. According to SPARK-1706, this issue will be fixed and released in Spark 1.4.

As Lan was saying, the use of multiple worker instances is only relevant in standalone mode. There are two reasons why you would want multiple instances: (1) garbage collector pauses can hurt throughput for large JVMs; (2) heap sizes of >32 GB can't use CompressedOops.

Read more about how to set up multiple worker instances.
