lockable-resources-plugin

Unavailable Resource Sends Build back to Queue

Open epiq-ben opened this issue 1 year ago • 4 comments

What feature do you want to see added?

This feature would be off by default. When enabled, a build would attempt to lock its resource when it reaches the front of the build queue. If the resource is busy, the build would be added to the resource queue and moved back to the second position in the build queue, allowing the next build to be sent to the now-available executor. Each time the build reaches the front of the build queue again, it would check whether the resource is still locked; if so, it would again be moved back to second position. This would prevent multiple jobs that need the same limited resource from occupying all available executors and spinning while waiting for the resource to become available.

Upstream changes

No response

Are you interested in contributing this feature?

Yes, I would be more than happy to contribute to this issue. This would be my first time contributing and I am not exactly sure where to start with it.

epiq-ben avatar Aug 30 '24 02:08 epiq-ben

@epiq-ben It sounds like one of these: https://github.com/jenkinsci/lockable-resources-plugin?tab=readme-ov-file#take-first-position-in-queue or https://github.com/jenkinsci/lockable-resources-plugin?tab=readme-ov-file#lock-queue-priority
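For reference, the first of those README sections corresponds to the `inversePrecedence` parameter of the `lock` step. A minimal sketch (the resource name is made up, and parameter availability depends on the plugin version installed):

```groovy
// Sketch: with inversePrecedence, the most recently queued build
// waiting for the lock acquires it first, instead of FIFO order.
lock(resource: 'my-limited-resource', inversePrecedence: true) {
  echo 'holding the resource'
}
```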

Do you still want to contribute? You can use this link and assign an issue to yourself: https://github.com/jenkinsci/lockable-resources-plugin/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22

mPokornyETM avatar Sep 15 '24 20:09 mPokornyETM

@mPokornyETM I don't think that is what @epiq-ben meant. I think he wants the same feature that I am looking for:

When I have an agent with 5 executors where 1 is running and the 4 others are waiting for a resource locked by the 1st job, all 4 waiting jobs should be removed from the executors and put back in the queue. This way other jobs that may have been waiting to start can actually run instead of everybody waiting for a single job.

We are having this issue, where, e.g., in a multi-branch job several of our executors are blocked waiting for a lockable resource. If we could somehow move these jobs back into the queue (or maybe some special queue, since the jobs might have to go back to the exact same node), we could actually use the agent to the fullest instead of wasting executors that sit idle.

malice00 avatar Nov 12 '24 14:11 malice00

I'm running into this as well. I have an execution node with several execution slots. What I'm finding is that the first job takes the lock, and another job needing that same lock is blocked while still occupying an execution slot that could be used by a job that doesn't require the lock. I'm using declarative pipelines. Is there any way to keep this from happening, or does this require enhancements to the plugin?

#93 also appears relevant, if not a duplicate.

corey-kdm avatar Jun 06 '25 21:06 corey-kdm

Never mind, I was able to resolve this. I'll post here in case this helps someone else. It would be nice if the documentation were clearer, as this is an area of Jenkins that I don't find intuitive.

I was using this general structure in my declarative pipeline:

pipeline {
  // agent at the pipeline level: an executor is allocated first,
  // and the build then waits on that executor for the lock
  agent {
    label 'some-label'
  }
  options {
    lock('end-to-end-test-resource')
  }

  stages {
    stage('test stuff') {
      steps {
        // ...
      }
    }
  }
}

The above setup caused executor starvation if multiple jobs were blocked on the lock.

To avoid this, I found I had to nest my stages:

pipeline {
  // don't allocate an agent to evaluate the pipeline
  agent none

  stages {
    stage('resource-constrained stages') {
      agent {
        label 'some-label'
      }
      options {
        lock('end-to-end-test-resource')
      }
      // BEGIN nested stages
      stages {
        stage('test stuff') {
          steps {
            // ...
          }
        }
      }
      // END nested stages
    }
  }
}

With this setup, jobs that are blocked on the lock don't take up an executor. Somewhat disconcertingly, they disappear from the queue altogether (see #102), although you can still see them under "Lockable Resources | Queue".
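For scripted pipelines, a similar effect can be achieved by acquiring the lock before requesting a node, so the wait happens without holding an executor. A minimal sketch, assuming the same label and resource name as above:

```groovy
// Scripted-pipeline sketch: the lock step wraps the node step,
// so a build waiting for the lock does not yet occupy an executor.
lock('end-to-end-test-resource') {
  node('some-label') {
    stage('test stuff') {
      // ...
    }
  }
}
```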

corey-kdm avatar Jun 07 '25 00:06 corey-kdm