Hello,
I have the following scenario and need your help:
We have a number of jobs in the queue, and sometimes people want to stop/erase an ongoing job at the click of a button.
Therefore: is it possible to stop a scan from "outside"? For example by raising an exception (#1313)?
I already tried it with the "delete()" method, but it does not really stop the job.
Here's the corresponding snippet of the code I tried (it lives in a Flask route):
import logging
from rq.job import Job
from rq.exceptions import NoSuchJobError

# cancels a job in the queue; r is the Redis connection
try:
    job = Job.fetch(id=hash, connection=r)
    job.delete()
except NoSuchJobError:
    logging.warning(f"No job with id {hash} to delete.")
I'm thankful for every tip 😄
job.delete() removes the job from the Redis queue and from all of the job registries (FinishedJobRegistry, DeferredJobRegistry, StartedJobRegistry, ScheduledJobRegistry and FailedJobRegistry).
If the job is not in the running state (for example, a queued or scheduled job), then job.delete() should work.
The job itself is executed by the Python interpreter, so an exception raised inside the job terminates it.
@Angi2412 to stop an RQ job, you'll need to send the SIGINT signal twice to the worker. This is on my to-do list.
@selwin
How can I do that from the outside? Do I understand correctly that I would have to write/implement a different signal handler?
Because I would need the worker's PID (which would be hard to get) in order to send the SIGINT signal from another process, right?
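For illustration, sending SIGINT from another process boils down to os.kill; the hard part, as noted above, is knowing the worker's PID. A minimal sketch (the function name is mine, and it assumes a POSIX system):

```python
import os
import signal

def request_worker_shutdown(worker_pid: int) -> None:
    """Send SIGINT to an RQ worker process by PID.

    The first SIGINT requests a warm shutdown (the worker finishes its
    current job first); a second SIGINT forces it to stop immediately.
    """
    os.kill(worker_pid, signal.SIGINT)
```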
Yes, we'll need to implement a different signal handler. When the main worker process receives that signal from the outside, it calls kill_horse() and proceeds as though the job failed.
Fixed in https://github.com/rq/rq/pull/1363/files . You can use send_kill_horse_command()
to tell a worker to stop a job it's currently working on.