Question
I am using http://python-rq.org/ to queue and execute tasks on Heroku worker dynos. These are long-running tasks and occasionally I need to cancel them in mid-execution. How do I do that from Python?
from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.enqueue(count_words_at_url, 'http://nvie.com')
and later in a separate process I want to do:
from redis import Redis
from rq import Queue
from my_module import count_words_at_url

q = Queue(connection=Redis())
result = q.revoke_all()  # or something
Thanks!
Answer 1:
If you have the job instance at hand, simply call:
job.cancel()
Or if you can determine the job id:
from rq import cancel_job
cancel_job('2eafc1e6-48c2-464b-a0ff-88fd199d039c')
http://python-rq.org/contrib/
But that just removes it from the queue; I don't know that it will kill it if already executing.
You could have the task record its start wall time, check the elapsed time periodically, and raise an exception/self-destruct after a set period.
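That self-destruct idea can be sketched like this; the function name long_running_task and the doubling "work" are illustrative stand-ins, not part of rq:

```python
import time


class TaskTimeout(Exception):
    """Raised when the task exceeds its allotted wall time."""


def long_running_task(items, max_runtime=600):
    """Process items, self-destructing once max_runtime seconds elapse."""
    start = time.monotonic()
    results = []
    for item in items:
        # Check elapsed wall time before each unit of work.
        if time.monotonic() - start > max_runtime:
            raise TaskTimeout('task exceeded %s seconds' % max_runtime)
        results.append(item * 2)  # stand-in for the real per-item work
    return results
```

This does not let another process cancel the job on demand, but it does bound how long a runaway task can occupy a worker dyno.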
For manual, ad-hoc style death: if you have redis-cli installed, you can do something drastic like flushing all queues and jobs (note that flushall wipes every key in the Redis database, not just rq's):
$ redis-cli
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> exit
I'm still digging around the documentation to try and find how to make a precision kill.
Not sure if that helps anyone since the question is already 18 months old.
Answer 2:
I think the most common solution is to have the worker spawn another thread/process to do the actual work, and then periodically check the job metadata. To kill the task, set a flag in the metadata and then have the worker kill the running thread/process.
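A minimal sketch of that pattern, with the polling logic separated out from rq so it is easy to see; run_with_cancel and its parameters are illustrative names, not rq API:

```python
import multiprocessing
import time


def run_with_cancel(target, should_cancel, poll_interval=0.1):
    """Run target() in a child process, polling should_cancel() and
    terminating the child if it ever returns True."""
    proc = multiprocessing.Process(target=target)
    proc.start()
    while proc.is_alive():
        if should_cancel():
            proc.terminate()  # kill the running work
            proc.join()
            return 'cancelled'
        time.sleep(poll_interval)
    proc.join()
    return 'finished'
```

Inside an rq job, should_cancel could get the current job via rq.get_current_job(), call job.refresh() to reload its metadata from Redis, and check a flag such as job.meta.get('cancel'); the cancelling process would fetch the job by id, set job.meta['cancel'] = True, and persist it with job.save_meta() (older rq versions use job.save()). The 'cancel' flag name is an assumption, not an rq convention.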
Source: https://stackoverflow.com/questions/16793879/cancel-an-already-executing-task-in-python-rq