We use a Spark cluster in `yarn-client` mode to run several business jobs, but sometimes a task runs for too long. We don't set a timeout, but I think such a task should be stopped somehow. How can we kill it?
The trick here is to log in directly to the worker node and kill the process. Usually you can find the offending process with a combination of `top`, `ps`, and `grep`. Then just do a `kill pid`.
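A minimal sketch of that sequence on the worker node. The application id and pid below are placeholders, not values from your cluster; substitute your own YARN application id (visible in the ResourceManager UI) when grepping:

```shell
# Spot the runaway process interactively, sorted by CPU usage:
top

# Or search for the Spark executor JVM belonging to the stuck job.
# "application_1234_0001" is a placeholder application id.
ps aux | grep "application_1234_0001" | grep -v grep

# Kill the process once you have its pid (12345 is a placeholder):
kill 12345

# If it ignores SIGTERM, escalate to SIGKILL:
kill -9 12345
```

Note that YARN may restart a killed executor on another node, so this stops the current attempt rather than the whole application.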