Terminating a Spark step in AWS

Posted by 拟墨画扇 on 2019-12-31 22:24:35

Question


I want to set up a series of Spark steps on an EMR cluster and terminate the current step if it is taking too long. However, when I SSH into the master node and run hadoop job -list, the master node seems to believe there are no jobs running. I don't want to terminate the cluster, because doing so would force me to buy a whole new hour of whatever cluster I'm running. Can anyone please help me terminate a Spark step in EMR without terminating the entire cluster?


Answer 1:


That's easy:

yarn application -kill [application id]

You can list your running applications with:

yarn application -list
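
A minimal sketch putting the two together, assuming you are already on the EMR master node; the grep pattern "my-spark-step" is a placeholder for whatever name your application runs under:

# Find the id of the running application whose name matches, then ask YARN to kill it.
# The first column of `yarn application -list` is the Application-Id, hence awk field 1.
APP_ID=$(yarn application -list -appStates RUNNING 2>/dev/null | grep "my-spark-step" | awk '{print $1}')
yarn application -kill "$APP_ID"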



Answer 2:


You can kill the application from the ResourceManager web UI (linked at the top right under Cluster Status). In the ResourceManager, click on the application you want to kill; on the application page there is a small "Kill" link (top left) that you can click to kill the application.

Obviously you can also SSH in, but this way is faster and easier for some users.
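
If you would rather not click through the UI, the ResourceManager also exposes a REST API for the same operation; a hedged sketch, assuming the default ResourceManager port 8088 on the master node and a placeholder application id:

# Ask the ResourceManager to move the application to the KILLED state.
curl -X PUT -H "Content-Type: application/json" -d '{"state":"KILLED"}' http://<master-node-dns>:8088/ws/v1/cluster/apps/application_1453990000000_0001/state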



Source: https://stackoverflow.com/questions/35020029/terminating-a-spark-step-in-aws
