Manually retrying job from job result object
Hey Sam,
Would you be open to adding functionality to manually trigger a retry on a job from a job result object?
Something like this works; it currently requires manually specifying queue_name, but we could get that from the job result if you're OK with also allowing #161:
from arq.constants import result_key_prefix

async def retry_job(job, queue_name='process_legislation'):
    # `arq` is an ArqRedis pool created with arq.connections.create_pool()
    # remove the old result so the job id isn't treated as already used
    await arq.delete(result_key_prefix + job.job_id)
    new_job = await arq.enqueue_job(
        job.function,
        *job.args,
        **job.kwargs,  # keep keyword arguments on the retried job
        _queue_name=queue_name,
        _job_id=job.job_id,
        _defer_by=5)
    return new_job
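For context, here's roughly how I call it, just as a sketch: all_job_results() and the success flag are existing arq API, but retrying only failed results is simply my own setup.

from arq.connections import create_pool

arq = await create_pool()  # module-level pool that retry_job above uses
for result in await arq.all_job_results():  # stored JobResult objects
    if not result.success:  # only re-enqueue jobs that raised
        await retry_job(result)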
Yes, where would this function go?
I was thinking it could also be a method on ArqRedis which accepts a JobResult object:
from arq.connections import create_pool
arq = await create_pool()
await arq.retry_job(result)
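For concreteness, here's a rough sketch of what that method could look like, written as a subclass so nothing beyond enqueue_job, delete and result_key_prefix is assumed; the explicit queue_name argument is only needed until the result carries the queue name via #161, and I'm assuming the result exposes job_id, function, args and kwargs as in the snippet above.

from typing import Optional

from arq.connections import ArqRedis
from arq.constants import result_key_prefix
from arq.jobs import JobResult


class RetryableArqRedis(ArqRedis):
    async def retry_job(self, result: JobResult, queue_name: Optional[str] = None):
        # drop the stale result so the job id isn't seen as already enqueued
        await self.delete(result_key_prefix + result.job_id)
        return await self.enqueue_job(
            result.function,
            *result.args,
            **result.kwargs,
            _queue_name=queue_name,  # None falls back to the pool's default queue
            _job_id=result.job_id,
        )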
It would be nice to be able to call this directly from the result object, e.g. await result.retry(), but that might get complicated since we would also have to keep track of which connection to use.
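Just to illustrate the connection-tracking point, one hypothetical way would be a small wrapper that carries the pool alongside the result; none of this exists in arq, and it only relies on delete and enqueue_job.

from dataclasses import dataclass

from arq.connections import ArqRedis
from arq.constants import result_key_prefix
from arq.jobs import JobResult


@dataclass
class BoundResult:
    # hypothetical wrapper: keeps the pool the result was read from next to it
    result: JobResult
    redis: ArqRedis

    async def retry(self):
        # same steps as retry_job above, using the stored connection
        await self.redis.delete(result_key_prefix + self.result.job_id)
        return await self.redis.enqueue_job(
            self.result.function,
            *self.result.args,
            **self.result.kwargs,
            _job_id=self.result.job_id,
        )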
Do you have a preference?
Yes, it should be a function on ArqRedis.
I can give this a shot, as I need this functionality in a service I'm writing. Specifically, I need to be able to retry an expired job, so I think this will do what I need.