Spark workers stopped after driver commanded a shutdown

Submitted by 社会主义新天地 on 2019-12-10 06:39:55

Question


Basically, the master node also acts as one of the slaves. Once the slave running on the master finished its work, it called SparkContext to stop, and this command propagated to all the other slaves, which stopped their execution mid-processing.

Error log from one of the workers:

INFO SparkHadoopMapRedUtil: attempt_201612061001_0008_m_000005_18112: Committed

INFO Executor: Finished task 5.0 in stage 8.0 (TID 18112). 2536 bytes result sent to driver

INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown

ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
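
For context, this shutdown is the normal end of a Spark application's life: when the driver program calls SparkContext.stop(), the scheduler tells every executor, including the one co-located with the master, to exit, which produces the "Driver commanded a shutdown" line above. A minimal sketch of such a driver, assuming a Scala word-count application with placeholder paths:

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCount"))
    val counts = sc.textFile("hdfs:///input/data.txt")  // placeholder input path
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.saveAsTextFile("hdfs:///output/counts")      // action: blocks until the job finishes
    // stop() releases all executors; if it runs while other jobs are still
    // executing, their tasks are killed mid-processing, as described above.
    sc.stop()
  }
}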


Answer 1:


Check your resource manager's user interface: if you see any failed executors, their details will point to a memory error. If no executor failed but the driver still called for a shutdown, the cause is usually driver memory, so try increasing the driver memory. Let me know how it goes.
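
As a concrete sketch of that suggestion, driver memory is raised with the --driver-memory flag of spark-submit (the sizes, class name, and jar below are placeholders):

spark-submit --driver-memory 4g --executor-memory 4g --class com.example.WordCount wordcount.jar

Note that in client mode spark.driver.memory cannot be set through SparkConf inside the application, because the driver JVM has already started by the time that code runs; it must come from the command line or from spark-defaults.conf.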



Source: https://stackoverflow.com/questions/40993104/spark-workers-stopped-after-driver-commanded-a-shutdown
