
[JENKINS-43336] Add getLock/releaseLock steps.

Open abayer opened this issue 7 years ago • 27 comments

JENKINS-43336

This is specifically to support usage of locking across stages in Declarative. The use case is something like this:

stage('foo') {
  steps {
    getLock resource:'some-resource'
    // do some other stuff
  }
}

stage('bar') {
  steps {
    // do some other stuff
    releaseLock()
  }
}
  • [x] Implementation.
  • [ ] Tests (both of getLock/releaseLock and interoperability between them, the block-scoped lock step, and locking freestyle builds).
  • [ ] Snippet generator/help.html.

cc @reviewbybees and @jglick (who I'd like to assign as a reviewer but can't because he's not a contributor to the repo!)

abayer avatar Jun 30 '17 14:06 abayer

The use case is something like this

But that is something you should never do! Because then you are not guaranteeing the lock will ever be released.

I feel that the block-scoped lock step is the only appropriate primitive, and if Declarative does not currently support that, then that is an RFE for Declarative. For example, it could be an addition to an options block. Or the syntax could be extended to allow wrapper steps to be used outside stages.

jglick avatar Jun 30 '17 15:06 jglick

@jglick I get what you're saying. You can do lock(...) in options already, but that applies across the whole build. I suppose I can try to find a way to do options-but-for-subsets-of-stages...

EDIT: And, inspired by @dolphy01's comment above - maybe this approach would make sense with a RunListener that automatically clears all locks for a Run upon completion?
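For reference, a minimal sketch of the whole-build variant that already works, using the block-scoped lock step as a pipeline-level option (the resource name is just an example):

pipeline {
    agent any
    options {
        // Held for the entire build and released automatically when it ends.
        lock(resource: 'some-resource')
    }
    stages {
        stage('foo') {
            steps {
                echo 'runs while some-resource is held'
            }
        }
    }
}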

abayer avatar Jun 30 '17 15:06 abayer

I did not realize that lock() could be performed in options to lock the whole build. That actually satisfies some of my use cases. I think it would still be useful to have separate control over lock/unlock, but I agree that it would have to be done in such a way that the deadlock potential would be 0.

I'm (obviously) not familiar enough with the Pipeline constructs: is there a way to automatically force all locks acquired in a job to be released when that job exits?

dolphy01 avatar Jun 30 '17 15:06 dolphy01

@dolphy01 When using the lock(...) {...} step, all locks will automatically be released when either the block is completed or the build fails/is killed/whatever.
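For illustration, the block-scoped form in Scripted Pipeline (the resource name and shell command are examples):

node {
    lock(resource: 'my_resource') {
        // my_resource is held only while this block runs; it is released
        // automatically when the block exits, including on failure or abort.
        sh './run-tests.sh'
    }
    // At this point my_resource has already been released.
}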

abayer avatar Jun 30 '17 16:06 abayer

@abayer sorry, I was ambiguous there. I meant in the context of using getLock():

pipeline {
    stages {
        stage('locking') {
            steps {
                getLock resource: "my_resource"
            }
        }
    } <-- Is there a way to force the release of anything acquired with "getLock" here?
} <-- or here?

dolphy01 avatar Jun 30 '17 16:06 dolphy01

Or, another way of asking, is it a necessity that using getLock/releaseLock imposes deadlock management on the user? I would think that this would lead to most uses ending with a "safety net" post step:

pipeline {
    stages {
        stage('locking') {
            steps {
                getLock resource: "my_resource"
            }
        }
    }
    post {
        always {
            releaseLock resource: "my_resource"
        }
    }
}

dolphy01 avatar Jun 30 '17 16:06 dolphy01

@dolphy01 Aaaah - yeah, with the current code here, you do need that explicit releaseLock() call, but I think I'm going to add a RunListener to auto-cleanup any unreleased locks put in by getLock.

abayer avatar Jun 30 '17 16:06 abayer

Ah, ok. My own personal use cases would all call for the lock to be cleaned up at the end of the job (whether by myself or auto-cleaned).

I don't know whether anyone out there has use cases that call for locking across multiple jobs. If that's a valid use case, maybe getLock could take an option:

pipeline {
    stages {
        stage('locking') {
            steps {
                getLock resource: "my_cleaned_resource"
                getLock resource: "my_persistent_resource", autoRelease: false
            }
        }
    }  
} <-- "my_cleaned_resource" is cleaned up, "my_persistent_resource" is still locked

Again, that's if you think that's even a valid use case. I personally don't think I would use it.

dolphy01 avatar Jun 30 '17 16:06 dolphy01

@dolphy01 Oh gawd, no, please dear any-and-all-gods-that-may-or-may-not-exist, no cross-builds locking! =)

abayer avatar Jun 30 '17 16:06 abayer

@jglick Actually, I'm pretty sure the locks will always be released so long as LockRunListener.onCompleted gets fired.
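Roughly the shape such a listener takes; this is only a sketch, and the method body is a stub rather than the plugin's actual lock-release code:

import hudson.Extension
import hudson.model.Run
import hudson.model.TaskListener
import hudson.model.listeners.RunListener

@Extension
class LockCleanupListener extends RunListener<Run<?, ?>> {
    @Override
    void onCompleted(Run<?, ?> run, TaskListener listener) {
        // A real implementation would delegate to the plugin's resource manager
        // to release anything still held by this run; stubbed here for clarity.
        listener.getLogger().println('Releasing locks held by ' + run.getExternalizableId())
    }
}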

abayer avatar Jun 30 '17 17:06 abayer

@jglick as mentioned in the ticket, I personally think this is a great addition, as it opens up the possibility to release the lock the moment we no longer need it, without also having to drop the node (in the structured pipeline syntax). Leaving the node just to release the lock sooner forces one to deal with stash/unstash and adds a lot more overhead than necessary.

@abayer Is there any chance you could add a return value that contains the locked resources? That seems to be mandatory when dealing with labeled locks, right?
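For comparison, newer versions of the block-scoped step can already expose the acquired resource name; a sketch assuming the label, quantity, and variable parameters are available in the installed plugin version:

lock(label: 'device-pool', quantity: 1, variable: 'LOCKED_DEVICE') {
    // LOCKED_DEVICE names the concrete resource that was acquired from the label.
    echo "Acquired: ${env.LOCKED_DEVICE}"
}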

dgeissl avatar Jul 03 '17 08:07 dgeissl

+1 for @oleg-nenashev's suggestion to rename this to acquireLock(). getXYZ() has a totally different meaning in Java, and the current getLock() naming is very confusing.

mateusz-was avatar Jul 05 '17 09:07 mateusz-was

What happens if you have an agent declaration on any of the stage blocks?

HRMPW avatar Jul 11 '17 01:07 HRMPW

@abayer Any updates on this? When can we expect it to be completed and merged? This feature is sorely missing from Declarative Pipeline.

martinsefcik avatar Sep 13 '17 07:09 martinsefcik

@abayer @jglick Any activity on lockable-resources? This (and quite a few other pull requests) would significantly improve or fix the plugin, but none of them are showing any progress.

dgeissl avatar Nov 16 '17 17:11 dgeissl

I've been meaning to comment on this one, but just in case this PR is going the other way - as a user of scripted pipeline I have a need for effectively identical steps to those originally outlined for a similar scenario, and I don't feel like I'm being particularly crazy, either :)

I am at this moment having to write my own helper class in shared library code to facilitate this, since it'll let me halve the time that a pipeline is running for, so it would be nice if this were officially supported.

I can see the rationale for scoped steps for workspaces, directories, stages, etc., but I don't know that we benefit from enforcing this model on lockable-resources.
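For anyone in the same situation, this is roughly the shape such a shared-library helper takes. The LockableResourcesManager calls below are assumptions about the plugin's internals and may differ between plugin versions:

// vars/manualLock.groovy in a shared library (sketch only)
import org.jenkins.plugins.lockableresources.LockableResourcesManager

def acquire(String name) {
    def manager = LockableResourcesManager.get()
    // Assumed internal API: look up the resource and reserve it for this build.
    manager.reserve([manager.fromName(name)], env.BUILD_TAG)
}

def release(String name) {
    def manager = LockableResourcesManager.get()
    // Assumed internal API: drop the reservation taken in acquire().
    manager.unreserve([manager.fromName(name)])
}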

philmcardle avatar Nov 16 '17 18:11 philmcardle

I have run into a use case where I need to lock multiple stages because each stage runs on a different type of agent.
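In Scripted Pipeline that case already works by putting both node blocks inside one lock block; a minimal sketch (labels and commands are illustrative):

lock(resource: 'shared-test-rig') {
    node('linux') {
        stage('Build') {
            sh 'make'
        }
    }
    node('windows') {
        stage('Test') {
            bat 'run-tests.bat'
        }
    }
}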

josephearl avatar Jan 18 '18 08:01 josephearl

Can we rename getLock to acquireLock please? This isn't a getter.

hendrikhalkow avatar Feb 01 '18 05:02 hendrikhalkow

+1. We are using the "lock resource" feature for dynamic resources, so a releaseLock step would be very helpful.

s2oBCN avatar May 18 '18 15:05 s2oBCN

This is a really important feature for me as well. What is the progress, and how can I help?

sofusalbertsen avatar Sep 28 '18 12:09 sofusalbertsen

What's the current status of this? This is a feature I'd like to use too.

antinodes avatar Nov 09 '18 22:11 antinodes

I implemented something similar in https://github.com/jenkinsci/lockable-resources-plugin/pull/179. I don't use the queue; instead there is a boolean tryLock(resource: string).

tomasbjerre avatar Feb 20 '20 04:02 tomasbjerre

Superseded by #179

TobiX avatar Apr 12 '20 22:04 TobiX

Could this be reopened, as #179 was never merged?

sfc-gh-dschumann avatar Mar 31 '21 07:03 sfc-gh-dschumann

So this issue in Jenkins was closed as resolved, yet I couldn't find a way to acquire locks across multiple stages in non-scripted pipelines. Does anyone have an overview of what is currently possible?

sfc-gh-dschumann avatar Mar 31 '21 08:03 sfc-gh-dschumann

@sfc-gh-dschumann One super hacky way of doing this is to create a shallow outer stage that wraps the real stages, like this:

pipeline {
    agent any

    stages {
        stage('Hello') {
            options {
                lock('hello')
            }
            stages {
                stage('Hello1') {
                    steps {
                        echo 'Hello World from inner stage 1'
                    }
                }
                stage('Hello2') {
                    steps {
                        echo 'Hello World from inner stage 2'
                    }
                }
            }
        }
    }
}

(Screenshot of the resulting stage view omitted.) Neither the old nor the new UI recognizes that the parent stage is a parent stage, so the inner stages look like they are sequential. But it works (tm).

sofusalbertsen avatar Mar 31 '21 09:03 sofusalbertsen

Oh gawd, no, please dear any-and-all-gods-that-may-or-may-not-exist, no cross-builds locking! =)

For our similar use case it indeed was cross-build locks: back in the age of Multi-Phase jobs, an outer wrapper locked one of the devices we test and then ran lots of jobs to do the deploy/test/teardown of the SUT. But indeed, those jobs used a provided IP address or similar contact info, without caring that there was a lockable resource correlated to that.

In another use case (the one that led to the "Steal lock" button) we wanted to allow the devices to be left locked/reserved after the job ends, not recycled for the next test, so developers could do in-vivo post-mortems to investigate the bugs (some can take hours to reproduce). This approach explicitly wants open-ended lock support, without automatic unlocking when the job ends, and explicit open-ended unlock support, so that cleanup jobs can automate things ("tear down the SUT after the post-mortem and release all resources related to that build"), which in particular makes it possible not to give some users permission to manage locks directly.

So yeah, theories and consistency may be nice, but real life may have nasty requirements :)

And so I am in favor of picking up this PR, and making it mergeable again just so that people in my shoes won't have to reinvent the wheel on the side. Can they shoot themselves in the foot? Sure! Must be documented and all. Should they be allowed to try and shoot themselves in the foot if they know they need it? Also sure!

That said, to the esteemed reviewers and architecture sages who chimed in here over the years: besides ideological quarrels, does this PR introduce any regressions or breakage into the big picture? Namely, for the larger mass of Jenkins users who would not use the explicit steps to acquire and release the locks, would the plugin behave the same as without this PR?

jimklimov avatar Feb 02 '22 17:02 jimklimov