I did my homework and read the documentation at https://spark.apache.org/docs/latest/configuration.html. The settings below go in spark-folder/conf/spark-env.sh:
First, you should know that one Worker (you can say one machine, or one Worker Node) can launch multiple Executors (also called multiple Worker Instances, the term the docs use).
SPARK_WORKER_MEMORY is only used in standalone deploy mode.
SPARK_EXECUTOR_MEMORY is used in YARN deploy mode.

In standalone mode, you set SPARK_WORKER_MEMORY to the total amount of memory that can be used on one machine (by all Executors on that machine) to run your Spark applications.

In contrast, in YARN mode, you set SPARK_EXECUTOR_MEMORY to the memory of one Executor.
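For example, a standalone cluster's spark-env.sh might look like this (the values and instance count are purely illustrative, not a recommendation):

```shell
# spark-folder/conf/spark-env.sh (standalone mode; values are illustrative)
export SPARK_WORKER_INSTANCES=2   # launch 2 Executors (Worker Instances) on this machine
export SPARK_WORKER_MEMORY=8g     # total memory ALL Executors on this machine may use
export SPARK_WORKER_CORES=8       # total cores ALL Executors on this machine may use
```

With these settings, the 8g is shared by both Worker Instances; it is not 8g per Executor.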
SPARK_DRIVER_MEMORY is used in YARN deploy mode, specifying the memory for the Driver that runs your application and communicates with the Cluster Manager.
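On YARN, these per-Executor and Driver sizes are commonly passed to spark-submit instead of being set in spark-env.sh. A sketch (application name and all values are illustrative):

```shell
# YARN deploy mode (values are illustrative):
#   --executor-memory sets the memory of ONE Executor (spark.executor.memory)
#   --driver-memory   sets the memory for the Driver   (spark.driver.memory)
spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-memory 4g \
  --driver-memory 2g \
  your_app.py
```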