Hangfire
Is there a way to cancel all pending jobs in a named queue?
So I don't know if this is the best way, but I found two ways of doing it.
The first one is pretty slow:
var monitoringApi = JobStorage.Current.GetMonitoringApi();
var queues = monitoringApi.Queues();
var queue = queues.First(x => x.Name == queueName); // queueName: the named queue you want to purge
var toDelete = new List<string>();
for (var i = 0; i < Math.Ceiling(queue.Length / 1000d); i++) {
    monitoringApi.EnqueuedJobs(queue.Name, 1000 * i, 1000).ForEach(x => toDelete.Add(x.Key));
}
foreach (var jobId in toDelete) {
    BackgroundJob.Delete(jobId);
}
This one is much faster, but feels kind of hacky and doesn't leave a job as deleted.
db.Database.ExecuteSqlCommand(string.Format(@"DELETE j FROM HangFire.Job AS j
LEFT JOIN HangFire.JobQueue AS jq ON jq.JobId=j.Id
WHERE jq.[Queue]='{0}' AND (StateName='Enqueued');
DELETE FROM HangFire.JobQueue WHERE [Queue]='{0}';", queue.Name));
Great tip, thanks. To save future googlers some time,
var monitor = JobStorage.Current.GetMonitoringApi();
var queues = monitor.Queues();
@danodonovan Updated my little example to include that. Sorry if I made you search for that...
Throwing it all together
public static class HangfireExtensions
{
    public static void PurgeJobs(this IMonitoringApi monitor)
    {
        var toDelete = new List<string>();
        foreach (QueueWithTopEnqueuedJobsDto queue in monitor.Queues())
        {
            for (var i = 0; i < Math.Ceiling(queue.Length / 1000d); i++)
            {
                monitor.EnqueuedJobs(queue.Name, 1000 * i, 1000)
                    .ForEach(x => toDelete.Add(x.Key));
            }
        }

        foreach (var jobId in toDelete)
        {
            BackgroundJob.Delete(jobId);
        }
    }
}
Called with:
JobStorage.Current?.GetMonitoringApi()?.PurgeJobs();
@JustMaier Is this still the best way to do this?
@markalanevans as far as I know. I never got a different response or saw anything different...
@JustMaier So this looks like it would delete all jobs in all queues. Ideally we could specify a queue name right?
You can modify @JamieG's code to purge a queue by name. Just FYI - from my experience, the SQL query is actually way more performant if you have more than 100 or so items in the queue.
Right. Below is what I ended up with. It is very slow, and while it does work, you have to have a worker running and monitoring each queue; otherwise the jobs' state changes to Deleted but they don't actually get moved into the "Deleted" bucket in the Dashboard.
I wanted to use the storage-agnostic methods because otherwise it won't work when we switch to Redis. @odinserj is there a faster way to do this while still using the methods provided?
public class JobService : IJobService
{
    public void PurgeAllQueues()
    {
        var monitor = JobStorage.Current.GetMonitoringApi();
        foreach (QueueWithTopEnqueuedJobsDto queue in monitor.Queues())
        {
            PurgeQueue(queue.Name);
        }
    }

    public void PurgeQueue(string queueName)
    {
        var toDelete = new List<string>();
        var monitor = JobStorage.Current.GetMonitoringApi();
        var queue = monitor.Queues().First(x => x.Name == queueName);
        for (var i = 0; i < Math.Ceiling(queue.Length / 1000d); i++)
        {
            monitor.EnqueuedJobs(queue.Name, 1000 * i, 1000)
                .ForEach(x => toDelete.Add(x.Key));
        }

        foreach (var jobId in toDelete)
        {
            BackgroundJob.Delete(jobId);
        }
    }
}
I thought the proposed code above would reset the recurring jobs too, but it didn't. So I wrote my own method to reset recurring jobs at app start:
public void ResetRecurringJobs(List<string> newJobIds)
{
    using (var connection = JobStorage.Current.GetConnection())
    {
        var setKey = "recurring-jobs";
        var savedJobIds = connection.GetAllItemsFromSet(setKey);
        var missingJobsIds = savedJobIds.Except(newJobIds).ToList();
        foreach (var jobId in missingJobsIds)
        {
            RecurringJob.RemoveIfExists(jobId);
        }
    }
}
Is JobStorage.Current.GetMonitoringApi() designed for building custom job monitoring portals? I am thinking of having my own portal for this. Is this the right implementation to go with, or is there an alternate method that serves the data in a better way?
Can someone clarify the 'slowness' part?
I would imagine the cancellation, and thus the deletion, has to go through the cancellation tokens. So if you are using cancellation tokens on all your Hangfire calls:
BackgroundJob.Enqueue(() => ProcessRequestWorker(JobCancellationToken.Null, brokerRequest));
Is it faster or it doesn't make any difference?
And another thing, I suppose the polling intervals that the job servers use play a part in this process as well. If you use longer polling intervals, then it will obviously take longer to delete the jobs.
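If the polling interval does matter in your setup, this is roughly where it is configured for SQL Server storage; a minimal sketch of mine, assuming the "HangfireConnection" connection string name and the 5-second value as placeholders:
GlobalConfiguration.Configuration.UseSqlServerStorage(
    "HangfireConnection", // placeholder connection string name
    new Hangfire.SqlServer.SqlServerStorageOptions
    {
        // How often workers poll the JobQueue table for new (or re-queued) jobs.
        QueuePollInterval = TimeSpan.FromSeconds(5)
    });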
@odinserj Thoughts on this? It would be really nice to have a button next to any queue with jobs in it and the ability to delete all jobs.
Just using direct SQL would be the fastest, wouldn't it?
DELETE FROM [DatabaseName].[HangFire].[JobQueue]
@zerokewl88 if you're using sql-storage, yes. Not everyone is using sql storage :)
@rahulrulez if what you want to build exposes the same data as the existing dashboard, then you can use the MonitoringApi for sure. The existing dashboard solely relies on this.
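To illustrate (this snippet is mine, not from the thread): the same API exposes queue lengths and overall statistics, which is essentially what the dashboard pages render.
var monitor = JobStorage.Current.GetMonitoringApi();

// Overall counters, as shown on the dashboard home page.
var stats = monitor.GetStatistics();
Console.WriteLine($"Enqueued: {stats.Enqueued}, Processing: {stats.Processing}, Failed: {stats.Failed}");

// Per-queue view, as shown on the Queues page.
foreach (var queue in monitor.Queues())
{
    Console.WriteLine($"Queue '{queue.Name}' has {queue.Length} enqueued job(s)");
}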
DELETE FROM [DatabaseName].[HangFire].[JobQueue] WHERE QUEUE = 'your queue name'
DELETE FROM [DatabaseName].[HangFire].Job WHERE StateName <> 'Enqueued'
@odinserj Any feedback here? What do you suggest for clearing a queue? Something like BackgroundJob.EmptyQueue("queue-name"), and make it fast?
So I figured I'd contribute here, as this was still somewhat of a pain point for me. In our situation we have a dev box where we don't care about the state of the Redis database; we just want to wipe queued and recurring jobs so we don't get spammed a million times. We are using Hangfire.AspNetCore. Simply doing:
await db.KeyDeleteAsync("myPrefix:Hangfireschedule");
await db.KeyDeleteAsync("myPrefix:Hangfirerecurring-jobs");
Did the trick for us. When a new job gets added or a recurring job is scheduled, the zset is recreated properly and there are no issues. Hope this helps someone. (Do not use this in prod.)
This works for my needs
var mon = JobStorage.Current.GetMonitoringApi();
mon.EnqueuedJobs("printer",0,99999999).ForEach(x => {
BackgroundJob.Delete(x.Key);
});
I searched for a while, but no solution worked the way I needed. The snippet above killed all fire-and-forget jobs, which is what I needed.
An addition on Redis: if you are just developing and getting errors because methods don't exist anymore, deleting all Hangfire entries with redis-cli works too:
redis-cli KEYS "{hangfire*" | xargs redis-cli DEL
One way to magically watch your queued jobs disappear is to add a [LatencyTimeout] attribute to the job. In this case jobs queued longer than 2 minutes will get deleted and neatly marked with "Background job has exceeded latency timeout of 120 second(s)".
[Hangfire.LatencyTimeout(60 * 2)]
In my case I had a named queue that wasn't active, so a bunch of jobs accumulated - for days! Adding this attribute made them all just disappear before my eyes in the console.
For 'background jobs' that aren't transactional in nature it looks like it's important to add this LatencyTimeout to make sure the job doesn't run more than it should.
Yes you can add this to an existing job.
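For anyone wondering what that looks like in practice, here is a rough sketch of the attribute applied at the method level (the PrintJobs class and ProcessPrintQueue method are made-up names for illustration):
public class PrintJobs
{
    // Jobs enqueued against this method that wait longer than 120 seconds
    // are moved to the Deleted state instead of being executed.
    [Hangfire.LatencyTimeout(60 * 2)]
    public void ProcessPrintQueue(string documentId)
    {
        // ... actual work ...
    }
}

// Enqueued as usual:
// BackgroundJob.Enqueue<PrintJobs>(x => x.ProcessPrintQueue("doc-42"));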
DELETE FROM [DatabaseName].[HangFire].[JobQueue] WHERE QUEUE = 'your queue name'
DELETE FROM [DatabaseName].[HangFire].Job where statename <> 'Enqueued'
where statename = 'Enqueued'?
So I think you need to delete the Jobs, and then the JobQueues - I can't find this in the documentation but it works for me. First find what your queue is called by exploring the Hangfire.Job table, then run this pair of queries after adding your queue name where indicated:
DECLARE @Queue varchar(40) = 'j_queue_medium' -- CHANGE THIS QUEUE NAME AS REQUIRED
DELETE HangFire.Job FROM HangFire.Job INNER JOIN HangFire.JobQueue ON HangFire.Job.Id = HangFire.JobQueue.JobId WHERE (HangFire.JobQueue.Queue = @Queue)
DELETE FROM [HangFire].[JobQueue] WHERE QUEUE = @Queue
I am using Hangfire 1.7.6. Enqueued and Processing jobs left over after any unexpected shutdown are causing an issue, so I am trying to delete them using the above method, creating a separate recurring job just before my actual job to clear the jobs stuck in Processing and Enqueued. The issue I am facing with the Monitoring API is that I am not getting any queues: monitorapi.Queues() returns 0. With the second option, the issue is that the JobQueue table is locked and control does not return to the program.
@odinserj is this still the best way to handle this nowadays?
It is unsafe to run any custom queries against Hangfire's tables: it doesn't expect such uncontrolled changes, and some features can be broken, like continuations or batches for example. Hangfire should be able to handle the completion of each individual background job to preserve its "at least once" guarantees.
The best way to handle this feature is to design a job filter that works the same way as the LatencyTimeoutAttribute referenced above, but instead of comparing dates it should compare some global flag kept either in Hangfire's job storage or somewhere else.
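A rough sketch of what such a filter could look like, assuming a simple in-process static flag (the PurgeQueuesAttribute name and the PurgeRequested flag are illustrative; the flag could just as well be read from job storage):
using Hangfire.Common;
using Hangfire.States;

public class PurgeQueuesAttribute : JobFilterAttribute, IElectStateFilter
{
    // Assumption: something else in the application sets this flag, e.g. an
    // admin "empty queue" action. It could also be looked up in job storage.
    public static volatile bool PurgeRequested;

    public void OnStateElection(ElectStateContext context)
    {
        // When a worker picks a job up and it is about to enter the Processing
        // state, redirect it to the Deleted state instead.
        if (PurgeRequested && context.CandidateState is ProcessingState)
        {
            context.CandidateState = new DeletedState
            {
                Reason = "Job was deleted because a queue purge was requested"
            };
        }
    }
}
Registered globally with GlobalJobFilters.Filters.Add(new PurgeQueuesAttribute());, or applied to individual job classes/methods like any other filter attribute.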
@odinserj Once a job is in the Processing state, when will the OnStateElection method be invoked again? I was hoping LatencyTimeoutAttribute could mark as Deleted all the jobs that were in the Processing state when the server unexpectedly shut down, for example.
After an unexpected shutdown, sooner or later the job is re-queued. Once another worker picks it up for processing, elect state filters are invoked again for the Processing → Processing state change. So a latency filter will delete such a job.
The first one is pretty slow:
Right. Below is what i ended up with. It is very slow.
Can someone clarify the 'slowness' part?
I've inherited a system which uses the GetMonitoringApi() versions of the code above to reset tasks on AppStartup. (So thank you, from my predecessors, to those who worked out the solutions above :D )
We've recently started seeing massive hang times on startup of secondary servers when running this code - as in 5-20 minutes! So far it's only ever occurred on the secondary server - i.e. it seems to happen when one server running the Hangfire setup is already active, and a second server tries to start up, sharing the same database. We haven't yet tried reversing the start-up sequence to verify that it's the start order rather than the hardware, but I'd be surprised if it didn't show what I expected.
Questions:
- When people talked about slowness, what sort of scale of slowness were they talking about? This same scale, or have I got something drastically outside the ordinary?
- Does anyone know why it might only affect the 2nd server to start running?
- Does anyone already know exactly which line of the code is slow? There's a hypothesis offered above that it's the Delete(), but no active confirmation of that.
I've added logging around each line of the process so I'll find out myself the next time it happens, and report back.
My first thought to fix is to try to make the cleanup asynchronous ... using Hangfire itself?
i.e. collect all JobIds to be deleted (Eagerly) and then loop through them, but instead of doing
BackgroundJob.Delete(jobId);
actually do
BackgroundJob.Enqueue(() => BackgroundJob.Delete(jobId));
would that ... work?
It seems ... idk ... somehow a little too self-referential to be entirely safe.
@odinserj , do you have any thoughts about whether this is an actively dangerous idea?
(Not sure I have TechDebt budget to implement proper LatencyTimeout-derived solution that you describe above.)
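For what it's worth, a minimal sketch of the asynchronous-cleanup idea above, under the assumption that letting the purge itself run as a Hangfire job is acceptable (the QueuePurger, SchedulePurge and DeleteJobs names are made up, and the [Queue] name is a placeholder so the purge job doesn't land in the queue being purged):
public static class QueuePurger
{
    // Collect the ids eagerly (cheap), then hand the slow Delete() loop to Hangfire.
    public static void SchedulePurge(string queueName)
    {
        var monitor = JobStorage.Current.GetMonitoringApi();
        var queue = monitor.Queues().First(x => x.Name == queueName);

        var toDelete = new List<string>();
        for (var i = 0; i < Math.Ceiling(queue.Length / 1000d); i++)
        {
            monitor.EnqueuedJobs(queue.Name, 1000 * i, 1000).ForEach(x => toDelete.Add(x.Key));
        }

        // The actual deletion now happens in the background instead of blocking startup.
        BackgroundJob.Enqueue(() => DeleteJobs(toDelete));
    }

    // Placeholder queue name; run the purge on a queue that isn't itself being purged.
    [Hangfire.Queue("maintenance")]
    public static void DeleteJobs(List<string> jobIds)
    {
        foreach (var jobId in jobIds)
        {
            BackgroundJob.Delete(jobId);
        }
    }
}
Note that a server has to be listening to that queue for the deletes to actually run, which circles back to the worker requirement mentioned earlier in the thread.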