Question
I want to set up a series of Spark steps on an EMR Spark cluster and terminate the current step if it's taking too long. However, when I SSH into the master node and run hadoop jobs -list, the master node seems to believe that there are no jobs running. I don't want to terminate the cluster, because doing so would force me to buy a whole new hour of whatever cluster I'm running. Can anyone please help me terminate a Spark step in EMR without terminating the entire cluster?
Answer 1:
That's easy:
yarn application -kill [application id]
You can list your running applications with:
yarn application -list
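Putting it together, a minimal sketch of the workflow, assuming you can SSH into the EMR master node as the hadoop user (the key path, master DNS, and application id below are placeholders):

# connect to the master node (placeholder key and hostname)
ssh -i ~/mykey.pem hadoop@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

# list running YARN applications and note the id of the long-running Spark step
yarn application -list

# kill that application; the step should then end while the cluster keeps running
yarn application -kill application_1453990696025_0001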
Answer 2:
You can kill the application from the YARN Resource Manager (linked at the top right of the cluster status page). In the Resource Manager, click the application you want to kill, and on the application page there is a small "Kill" link (top left) you can click to kill the application.
Obviously you can also SSH in, but I think this way is faster and easier for some users.
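If the Resource Manager UI is not directly reachable from your browser, one common approach (a sketch, assuming the default YARN Resource Manager port 8088 and placeholder key/hostname) is to open an SSH tunnel to the master node and then browse to http://localhost:8088:

# forward local port 8088 to the Resource Manager on the EMR master node
ssh -i ~/mykey.pem -N -L 8088:localhost:8088 hadoop@ec2-xx-xx-xx-xx.compute-1.amazonaws.com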
Source: https://stackoverflow.com/questions/35020029/terminating-a-spark-step-in-aws