I see exit codes and exit statuses all the time when running Spark on YARN:
Here are a few:
CoarseGrainedExecutorBackend: RECEIVED SIGNAL 15:
Neither exit codes and statuses nor signals are Spark-specific; they are part of the way processes work on Unix-like systems.
Exit status and exit code are different names for the same thing: a number between 0 and 255 that indicates the outcome of a process after it terminated. Exit status 0 usually indicates success. The meaning of the other codes is program-dependent and should be described in the program's documentation. There are some established standard codes, though; see this answer for a comprehensive list.
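As a quick illustration (a minimal sketch using Python's standard library; `true` and `false` are the standard Unix utilities that always succeed and always fail), the exit status of a child process can be inspected like this:

```python
import subprocess

# `true` always succeeds (exit status 0), `false` always fails (exit status 1).
ok = subprocess.run(["true"])
bad = subprocess.run(["false"])
print(ok.returncode)   # 0: success
print(bad.returncode)  # 1: failure
```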
In the Spark sources I found the following exit codes. Their descriptions are taken from log statements and comments in the code and from my understanding of the code where the exit status appeared.
10: Uncaught exception occurred.
11: More than spark.yarn.max.executor.failures executor failures occurred.
12: The reporter thread failed more than spark.yarn.scheduler.reporterThread.maxFailures times.
13: The program exited before the user had initialized the Spark context, or the Spark context did not initialize before a timeout.
14: Declared as EXIT_SECURITY but never used.
15: A user class threw an exception.
16: The shutdown hook fired before the ApplicationMaster could report a final status. A comment in the source code explains: "The default state of ApplicationMaster is failed if it is invoked by shut down hook. This behavior is different compared to 1.x version. If user application is exited ahead of time by calling System.exit(N), here mark this application as failed with EXIT_EARLY. For a good shutdown, user shouldn't call System.exit(0) to terminate the application."
56: Executor is unable to send heartbeats to the driver more than "spark.executor.heartbeat.maxFailures" times.
101: Returned by spark-submit if the child main class was not found. In client mode (command line option --deploy-mode client) the child main class is the user-submitted application class (--class CLASS); in cluster mode (--deploy-mode cluster) it is the cluster-manager-specific submission/client class.
These exit codes most likely result from a program shutdown triggered by a Unix signal. The signal number can be calculated by subtracting 128 from the exit code. This is explained in more detail in this blog post (which was originally linked in this question). There is also a good answer explaining JVM-generated exit codes. Spark works with this assumption, as explained in a comment in ExecutorExitCode.scala.
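The "128 + signal number" convention can be checked with a small sketch (Python standard library; assumes a Unix-like system with the `sleep` utility):

```python
import subprocess

# Start a long-running child and kill it with SIGTERM (signal 15).
proc = subprocess.Popen(["sleep", "30"])
proc.terminate()  # sends SIGTERM
proc.wait()

# Python reports death-by-signal as a negative return code ...
print(proc.returncode)        # -15
# ... while a shell reports it as 128 + signal number:
print(128 - proc.returncode)  # 143, the well-known exit code for SIGTERM
```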
Apart from the exit codes listed above, there are a number of System.exit() calls in the Spark sources setting 1 or -1 as the exit code. As far as I can tell, -1 seems to be used to indicate missing or incorrect command line parameters, while 1 indicates all other errors.
Signals are a kind of event that allows sending system messages to a process. These messages are used, for instance, to ask a process to reload its configuration (SIGHUP) or to terminate (SIGKILL). A list of standard signals can be found in the signal(7) man page, in the section Standard Signals.
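A process can install handlers for most signals (SIGKILL and SIGSTOP cannot be caught). A minimal sketch of receiving SIGHUP, using Python's standard library on a Unix-like system:

```python
import os
import signal

received = []

def on_sighup(signum, frame):
    # A real daemon would reload its configuration here.
    received.append(signum)

# Install the handler, then send SIGHUP to our own process.
signal.signal(signal.SIGHUP, on_sighup)
os.kill(os.getpid(), signal.SIGHUP)

print(received)  # [1] — SIGHUP is signal number 1
```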
As explained by Rick Moritz in the comments below (thank you!), the most likely sources of signals in a Spark setup are the cluster manager and the operating system: YARN sends SIGTERM (and, after a grace period, SIGKILL) to containers it decides to kill, and the kernel's OOM killer sends SIGKILL to processes that exhaust memory.
I hope this makes it a bit clearer what these messages from Spark might mean.