@odinserj
Last active April 11, 2025 20:48
SkipWhenPreviousJobIsRunningAttribute.cs
// Zero-Clause BSD (more permissive than MIT, doesn't require copyright notice)
//
// Permission to use, copy, modify, and/or distribute this software for any purpose
// with or without fee is hereby granted.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
// INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
// OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
// TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
// THIS SOFTWARE.
// Hangfire.Core 1.8+ is required; for previous versions, please see the revision of this gist from 2022.
using System;
using System.Collections.Generic;
using Hangfire.Client;
using Hangfire.Common;
using Hangfire.States;
using Hangfire.Storage;

namespace ConsoleApp28
{
    public class SkipWhenPreviousJobIsRunningAttribute : JobFilterAttribute, IClientFilter, IApplyStateFilter
    {
        public void OnCreating(CreatingContext context)
        {
            // We can't handle old storages
            if (!(context.Connection is JobStorageConnection connection)) return;

            // We should run this filter only for background jobs based on
            // recurring ones
            if (!context.Parameters.TryGetValue("RecurringJobId", out var parameter)) return;

            var recurringJobId = parameter as string;

            // RecurringJobId is malformed. This should not happen, but anyway.
            if (String.IsNullOrWhiteSpace(recurringJobId)) return;

            var running = connection.GetValueFromHash($"recurring-job:{recurringJobId}", "Running");
            if ("yes".Equals(running, StringComparison.OrdinalIgnoreCase))
            {
                context.Canceled = true;
            }
        }

        public void OnCreated(CreatedContext filterContext)
        {
        }

        public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
        {
            if (context.NewState is EnqueuedState)
            {
                ChangeRunningState(context, "yes");
            }
            else if ((context.NewState.IsFinal && !FailedState.StateName.Equals(context.OldStateName, StringComparison.OrdinalIgnoreCase)) ||
                     (context.NewState is FailedState))
            {
                ChangeRunningState(context, "no");
            }
        }

        public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
        {
        }

        private static void ChangeRunningState(ApplyStateContext context, string state)
        {
            // We can't handle old storages
            if (!(context.Connection is JobStorageConnection connection)) return;

            // Obtaining a recurring job identifier
            var recurringJobId = context.GetJobParameter<string>("RecurringJobId", allowStale: true);
            if (String.IsNullOrWhiteSpace(recurringJobId)) return;

            if (context.Storage.HasFeature(JobStorageFeatures.Transaction.AcquireDistributedLock))
            {
                // Acquire a lock in newer storages to avoid race conditions
                ((JobStorageTransaction)context.Transaction).AcquireDistributedLock(
                    $"lock:recurring-job:{recurringJobId}",
                    TimeSpan.FromSeconds(5));
            }

            // Checking whether recurring job exists
            var recurringJob = connection.GetValueFromHash($"recurring-job:{recurringJobId}", "Job");
            if (String.IsNullOrEmpty(recurringJob)) return;

            // Changing the running state
            context.Transaction.SetRangeInHash(
                $"recurring-job:{recurringJobId}",
                new[] { new KeyValuePair<string, string>("Running", state) });
        }
    }
}
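
For reference, a minimal usage sketch (not part of the original gist; the DataImportJob class, its Run method, the "data-import" identifier, and the 5-minute cron expression are all assumptions for illustration). The filter only affects background jobs created from recurring ones, since OnCreating looks for the "RecurringJobId" parameter:

    // Usage sketch: apply the filter to the job method (or class) and register it
    // as a recurring job. All names and the schedule below are illustrative.
    using Hangfire;

    namespace ConsoleApp28
    {
        public class DataImportJob
        {
            [SkipWhenPreviousJobIsRunning]
            public void Run()
            {
                // long-running work goes here
            }
        }

        public static class RecurringJobSetup
        {
            public static void Register()
            {
                RecurringJob.AddOrUpdate<DataImportJob>(
                    "data-import",      // recurring job id; also used in the "recurring-job:{id}" hash key
                    job => job.Run(),
                    "*/5 * * * *");     // every 5 minutes
            }
        }
    }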
@novacema

novacema commented Apr 9, 2024

@frozzen10 The issue you're experiencing might be because the "Running" status of the job is not being reset properly when the job fails or when it's in a final state. This could cause the job to be immediately canceled on the next execution because the system thinks it's still running. To fix this, you should ensure that the "Running" status is reset in all cases when the job is in a final state, not just when it's not in a FailedState.

In OnStateApplied:

    var recurringJobId = SerializationHelper.Deserialize<string>(
        context.Connection.GetJobParameter(context.BackgroundJob.Id, "RecurringJobId"));

    if (string.IsNullOrWhiteSpace(recurringJobId)) return;

    if (context.NewState is EnqueuedState)
    {
        transaction.SetRangeInHash(
            $"recurring-job:{recurringJobId}",
            new[] { new KeyValuePair<string, string>("Running", "yes") });
    }
    else if (context.NewState.IsFinal)
    {
        transaction.SetRangeInHash(
            $"recurring-job:{recurringJobId}",
            new[] { new KeyValuePair<string, string>("Running", "no") });
    }

@marsel-mo

If you have this issue, here is a solution for it.

After investigating, I found out that our custom SkipWhenPreviousJobIsRunningAttribute triggers the
OnStateApplied method even after the job has been deleted.

The scenario that causes this is the following:
the job is triggered and, while it is still running and not yet completed, we delete it.
Once the job completes, OnStateApplied is triggered even though the job has been deleted, and it creates a new row in the hash table.

If we want to keep the custom SkipWhenPreviousJobIsRunningAttribute,
we should check whether the job has been deleted before adding the new row to the hash table:

    var job = JobStorage.Current.GetConnection().GetRecurringJobs(new[] { recurringJobId }).FirstOrDefault();
    if (job is { Removed: true }) return;

    transaction.SetRangeInHash(
        $"recurring-job:{recurringJobId}",
        new[] { new KeyValuePair<string, string>(RunningKey, "no") });

The benefit of this approach is that it saves us from creating unneeded data in the DB.
Unfortunately, it also increases the load on the DB, since it makes one additional request.
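
A hedged sketch of how this guard could be folded into the gist's ChangeRunningState helper; the GetRecurringJobs extension method comes from Hangfire.Storage, and the distributed-lock and hash-existence checks from the gist above are omitted here for brevity (an illustration, not the author's updated code):

    // Sketch: ChangeRunningState with the removed-job guard described above.
    // Requires the gist's usings plus System.Linq for FirstOrDefault().
    private static void ChangeRunningState(ApplyStateContext context, string state)
    {
        if (!(context.Connection is JobStorageConnection connection)) return;

        var recurringJobId = context.GetJobParameter<string>("RecurringJobId", allowStale: true);
        if (String.IsNullOrWhiteSpace(recurringJobId)) return;

        // Skip writing "Running" when the recurring job was deleted while this job was executing.
        var recurringJob = connection.GetRecurringJobs(new[] { recurringJobId }).FirstOrDefault();
        if (recurringJob == null || recurringJob.Removed) return;

        context.Transaction.SetRangeInHash(
            $"recurring-job:{recurringJobId}",
            new[] { new KeyValuePair<string, string>("Running", state) });
    }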

@odinserj
Author

Thanks for handling this. I have updated the gist with new methods available in Hangfire 1.8+ to avoid any race conditions. They work by acquiring a lock in the same transaction and by checking the existence of the recurring job first, so there should now be no trouble running this filter even if everything else goes wrong.

@sunnamed434

Thank you a lot, this saved me from a PostgreSqlDistributedLockException.

@Rayzbam

Rayzbam commented Feb 25, 2025

(Quoting @novacema's comment from Apr 9, 2024 above, including its simplified OnStateApplied snippet.)

Watch out: FailedState is not a final state, so if you have disabled AutomaticRetry without deleting the job on failure, you will never reach a DeletedState and "Running" will never be reset.

For that case, please change the condition to:

    context.NewState.IsFinal || context.NewState is FailedState
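
For the simplified variant quoted above, the adjusted OnStateApplied body would then look roughly like this (a sketch only; the gist at the top of the page already handles FailedState through its own condition):

    // Sketch: reset "Running" on any final state *or* on FailedState, which is not final.
    if (context.NewState is EnqueuedState)
    {
        transaction.SetRangeInHash(
            $"recurring-job:{recurringJobId}",
            new[] { new KeyValuePair<string, string>("Running", "yes") });
    }
    else if (context.NewState.IsFinal || context.NewState is FailedState)
    {
        transaction.SetRangeInHash(
            $"recurring-job:{recurringJobId}",
            new[] { new KeyValuePair<string, string>("Running", "no") });
    }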

@Rookian

Rookian commented Feb 26, 2025

Is there a new fixed version for this attribute? Why is it not part of Hangfire?

@fradzano

fradzano commented Mar 12, 2025

This filter is incredibly helpful, especially for recurring jobs prone to transient errors. We've implemented it for our data import jobs, which poll a somewhat unstable service every 5 minutes.

Previously, we tried:

  • DisableConcurrentExecutionAttribute, which worked as expected to prevent concurrent executions.
  • AutomaticRetryAttribute(0, Fail), but this wasn't ideal because we rely on monitoring failed jobs after a set number of retries.
  • Allowing the 5-minute jobs to flood the queue and retry for up to half an hour, which overloaded the target service.

This filter solved our problem perfectly. It prevents new job triggers based on the interval while a previous instance is still running, even if it's failing multiple times.

It would be fantastic to see this included in the core package!

@odinserj
Author

Jokes aside, I can't pick a meaningful short name for this filter that describes its behavior and makes it clear it works only with recurring jobs. This is now the only thing that prevents it from being included in Hangfire.Core 🤦‍♂️.

@co-dax

co-dax commented Mar 14, 2025

@odinserj SerializedRecurringJobAttribute, which would actually be used as just [SerializedRecurringJob] in code consuming the attribute. The same term and concept are used for the Serializable transaction isolation level in database management systems, and it refers to a pretty much analogous situation. See https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable, which states:

A serial execution is one in which each SQL-transaction executes to completion before the next SQL-transaction begins.

@fradzano

@odinserj, if I may recommend: SingleInstanceRecurringJobAttribute

@Rookian

Rookian commented Mar 17, 2025

Am I right that this attribute would not prevent a recurring job from running when the job has already been triggered manually by a user?

@sunnamed434

In my case I had a problem where a concurrent job was stuck in a kind of "loop" (sometimes the job got stuck and didn't run at all for hours or even days). That was because I was making a lot of Task.Run(...) calls in other services unrelated to Hangfire, so I simply moved most of my Task.Run calls to a System.Threading.Channels.Channel and everything works well now.
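
For context, a minimal sketch of the pattern described here (all names are illustrative, not from the original comment): instead of firing work with Task.Run, items are written to a bounded System.Threading.Channels.Channel and drained by a single background loop, so the work no longer competes with Hangfire's workers for thread-pool threads.

    // Illustrative sketch only: a Channel-based producer/consumer replacing ad-hoc Task.Run calls.
    // The WorkQueue and WorkItem names are made up for the example.
    using System;
    using System.Threading;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    public sealed record WorkItem(string Payload);

    public sealed class WorkQueue
    {
        private readonly Channel<WorkItem> _channel =
            Channel.CreateBounded<WorkItem>(new BoundedChannelOptions(capacity: 100)
            {
                FullMode = BoundedChannelFullMode.Wait // back-pressure instead of unbounded Task.Run
            });

        // Producer side: called from services that previously used Task.Run(...).
        public ValueTask EnqueueAsync(WorkItem item, CancellationToken ct = default) =>
            _channel.Writer.WriteAsync(item, ct);

        // Consumer side: a single long-running loop (e.g. a hosted service) drains the channel.
        public async Task ConsumeAsync(CancellationToken ct)
        {
            await foreach (var item in _channel.Reader.ReadAllAsync(ct))
            {
                Console.WriteLine($"Processing {item.Payload}");
            }
        }
    }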

@david-alonso-su

It works perfectly, many thanks.

But it does not work if you place the attribute on an interface.

This is OK:

    [SkipWhenPreviousJobIsRunning]
    public class JobWithITaskDelay90Sec : ITask<bool>
    {
    }

This does not work:

    public class JobWithITaskDelay90Sec : ITask<bool>
    {
    }

    [SkipWhenPreviousJobIsRunning]
    public interface ITask<TResult>
    {
    }

@sven-neubert-syzygy

Thank you for providing this code. It perfectly matches the functionality I was looking for.

Unfortunately, it doesn't work for me. Initially, it seemed to work fine on my local setup with a single Hangfire instance, although I didn't test it for very long.

Now that I've deployed it to four test instances, I'm seeing multiple instances of the same RecurringJob running simultaneously. I created a dummy job with Task.Delay for 10 minutes, which starts every minute. It ran overnight, and the test instances were restarted by IIS at some point during the night. Now, I have multiple instances of this job running concurrently.

@ejk34

ejk34 commented Apr 11, 2025

I'm curious: how does this differ from Hangfire Ace's concurrency and throttling via mutexes?
