Clean way to update repeatable job data?
The use case is updating the data for the next run of a job atomically, which currently seems to be possible only by calling removeRepeatable and re-adding the repeatable job, and only while the job is not actively running.
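For reference, here is a minimal sketch of that remove-and-re-add approach using Bull's current API; the queue name, repeat options, and payload are illustrative:

const Queue = require('bull');

const queue = new Queue('reports'); // illustrative queue name

// Replace the data for a repeatable job by removing it and re-adding it.
// Not atomic: a scheduled run can still fire between the two calls.
async function replaceRepeatableData(name, repeat, newData) {
  await queue.removeRepeatable(name, repeat); // requires the original RepeatOpts
  await queue.add(name, newData, { repeat });
}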
I will mark this issue as a future enhancement.
I also need this. Has anyone found a way to update a repeatable job's data during processing?
This is a really needed feature.
The documentation was not clear to me, and I was adding the job every time my Node process started, causing multiple jobs to run on every npm start. I cannot use the @gmcnaught workaround because I don't always know the old value of RepeatOpts, so I'm cleaning the queue before adding the job again and using a jobId (which I have defined in a jobs_constants constants file) to avoid duplicate jobs:
import Queue from 'bull';
import { REMINDER_FOR_TRACKING_HOURS } from '../config/jobs_constants';

module.exports = (RedisUrl, RedisOptions) => {
  // Here is where I build the queue object and write all the job specifications.
  const queue = new Queue('reminders', RedisUrl, RedisOptions); // queue name is illustrative

  // Empty the queue first so restarts don't stack duplicate repeatable jobs.
  queue.empty();
  queue.add(null, {
    jobId: REMINDER_FOR_TRACKING_HOURS,
    repeat: { cron: '0 10 26-31 * *' }
  });
};
An addOrUpdate method would be really useful, maybe with a required jobId or some other unique identifier.
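As a sketch, such a method could simply wrap the remove-and-re-add dance shown earlier behind one call; this is a hypothetical userland helper, not an existing Bull API:

// Hypothetical helper: replace a repeatable job's data in one call.
async function addOrUpdate(queue, name, data, opts) {
  // Drop any existing repeatable instance with the same repeat spec...
  await queue.removeRepeatable(name, opts.repeat);
  // ...then add the job again with the fresh data.
  return queue.add(name, data, opts);
}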
I will look into this as soon as I have a free slot.
Any update on this?
Having job.update() work for repeatable jobs would be really nice. I expose jobs through a GraphQL API, but without any data in them... :/
I also need to update repeatable jobs.
I think I've mentioned this in another thread:
We augmented Bull by storing job data for repeatable jobs in a hash keyed by job name. That way we can atomically update the hash with the new job data, and the first step in job.process pulls the hash data and updates the current job, so future runs see this data.
We've explored adding this to the Redis/Lua automation to integrate it into the system directly and to reduce the number of network calls, but it has not been a high priority. We have also considered handling deletes the same way: keeping a queue of repeatable jobs that need to be deleted and asynchronously deleting them on their next scheduled run.
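A minimal sketch of that pattern, assuming Bull plus a separate ioredis client; the hash key, queue name, and job name here are illustrative:

const Queue = require('bull');
const Redis = require('ioredis');

const queue = new Queue('reports'); // illustrative queue name
const redis = new Redis();          // separate client for the data hash

const DATA_HASH = 'repeatable-job-data'; // one hash, keyed by job name

// Stage new data atomically; the next run picks it up.
async function setNextRunData(jobName, data) {
  await redis.hset(DATA_HASH, jobName, JSON.stringify(data));
}

queue.process('sync-report', async (job) => {
  // First step of processing: pull staged data onto the current job,
  // so this run and all future runs see the latest values.
  const staged = await redis.hget(DATA_HASH, job.name);
  if (staged) {
    await job.update(JSON.parse(staged));
  }
  // ... actual job logic using job.data ...
});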
Thanks @gmcnaught!
I was trying to keep all the required information to run a job in its data, but it turns out I need to store it somewhere on the side.
I've faced the same need. It would be awesome if we had an easier way to perform updates on repeatable jobs.
Any update on this? Does job.update() work for repeatable jobs as well?
This feature would be very useful for cron jobs where we need to query something since the last task invocation; it's a pity that this doesn't work by default.
@manast Will this be implemented in BullMQ as well?
Any progress on this issue?
No
It's been 5 years since someone asked for this; it would be great to have it.
In case anyone finds it useful:
import { Queue } from 'bullmq';
// `log` is the author's project logger; `addJob` is their wrapper around queue.add.

export async function updateJobsOfQueueWithNewPattern(queue: Queue, pattern: string | undefined) {
  if (pattern) {
    log.info(`Update jobs of queue ${queue.name} with new pattern ${pattern}`);
    const connection = await queue.client;
    // Collect the raw repeat keys so the stored job data can be recovered.
    const keys = await connection.keys(`bull:${queue.name}:repeat:*:*`);
    const jobs = (await queue.getRepeatableJobs()).filter((job) => job.pattern !== pattern);
    for (const job of jobs) {
      log.info(`job: ${JSON.stringify(job)}`);
      let redisData;
      for (const key of keys) {
        const hash = await connection.hgetall(key);
        if (hash.name === job.name) {
          redisData = hash.data;
          break;
        }
      }
      if (redisData) {
        log.info(`Update job ${job.name} with new pattern ${pattern}`);
        const options = { repeat: { pattern }, jobId: job.id };
        // Remove the stale repeatable job and re-add it with the same data and the new pattern.
        await queue.removeRepeatableByKey(job.key);
        await addJob(job.name, queue, JSON.parse(redisData), options, false);
      }
    }
  }
}
A bit dirty, but it does what I wanted.
Also would love this feature :)
It has been 7 years and this problem has still not been solved. I want to update a job by deleting and re-creating it, but I cannot obtain the old data through getRepeatableJobs. Please look into this, @manast.