Add independent lock
Description
Add a lock class that does not depend on leader election.
High-level usage guideline:
try {
    lock.lock(timeout); // blocking
    doSomeAction();
} finally {
    lock.unlock();
}
Use cases
Actions that require a distributed lock.
Solution suggestion
Implementation details (high level):
- lock:
  - try to create a lease.
  - if the lease already exists, wait and retry until it is created or the timeout is reached.
- unlock:
  - delete the lease (see the sketch below).
Note that I have this implementation ready, so if confirmed, I can try opening a PR for it in the extended module. I would appreciate feedback on proceeding with it.
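For illustration only, here is a minimal sketch of the idea (not the actual implementation mentioned above): a lease-backed lock that retries creation until the timeout and deletes the lease on unlock. It assumes a recent client release where the generated CoordinationV1Api uses the request-builder style (createNamespacedLease(namespace, body).execute()); the class name LeaseBasedLock and the fixed retry interval are illustrative.

import java.time.Duration;
import java.util.concurrent.TimeoutException;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.CoordinationV1Api;
import io.kubernetes.client.openapi.models.V1Lease;
import io.kubernetes.client.openapi.models.V1ObjectMeta;

// Sketch only: a lease-backed lock, independent of leader election.
public class LeaseBasedLock {
    private final CoordinationV1Api api;
    private final String namespace;
    private final String name;

    public LeaseBasedLock(ApiClient client, String namespace, String name) {
        this.api = new CoordinationV1Api(client);
        this.namespace = namespace;
        this.name = name;
    }

    // Blocks until the lease is created or the timeout elapses.
    public void lock(Duration timeout) throws ApiException, InterruptedException, TimeoutException {
        V1Lease lease = new V1Lease().metadata(new V1ObjectMeta().name(name).namespace(namespace));
        long deadline = System.nanoTime() + timeout.toNanos();
        while (true) {
            try {
                api.createNamespacedLease(namespace, lease).execute(); // lease created => lock acquired
                return;
            } catch (ApiException e) {
                if (e.getCode() != 409) { // anything other than "already exists" is a real error
                    throw e;
                }
            }
            if (System.nanoTime() >= deadline) {
                throw new TimeoutException("could not acquire lease " + namespace + "/" + name);
            }
            Thread.sleep(1000); // simple fixed back-off between retries (illustrative)
        }
    }

    // Releases the lock by deleting the lease.
    public void unlock() throws ApiException {
        api.deleteNamespacedLease(name, namespace).execute();
    }
}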
What is the purpose of this class? We have several implementations of Lock already:
https://github.com/kubernetes-client/java/blob/master/extended/src/main/java/io/kubernetes/client/extended/leaderelection/resourcelock/LeaseLock.java
Actions that require a distributed lock. How can lock/unlock usage be achieved with the existing lock? Keep in mind that this functionality is commonly needed without leader election.
try {
    lock.lock(timeout); // blocking
    doSomeAction();
} finally {
    lock.unlock();
}
Is it that you want to be able to unlock()? If that is the case, I'd prefer that you added that to the existing leader election implementation vs adding a completely new implementation of locking/leader election.
There's not much difference between locking and leader election (except perhaps the unlock part) so I'd prefer to extend the existing implementations.
The existing LeaseLock implements io.kubernetes.client.extended.leaderelection.Lock, which is "strictly for use by the leaderelection code". Adding to it can feel like abuse. I am thinking of implementing java.util.concurrent.locks.Lock, as I encountered a need for it for common usage. What do you think?
Lock lock = ...;
if (lock.tryLock()) {
    try {
        // manipulate protected state
    } finally {
        lock.unlock();
    }
} else {
    // perform alternative actions
}
I think it's definitely fine to implement java.util.concurrent.locks.Lock, but I believe you can implement that using the LeaderElector class; you don't have to implement the lock acquire/retain/release code yourself.
That code is tricky to get right, and we're better off focusing on a single implementation.
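To make that concrete, here is a rough sketch of how a lock-style acquire/release might be layered on top of the existing LeaderElector, assuming the LeaseLock, LeaderElectionConfig, and LeaderElector APIs currently in the extended module (run(onStartLeading, onStopLeading) plus close()). The class name LeaderElectorLock and the chosen durations are illustrative, and note that close() stops renewing the lease rather than deleting it.

import java.time.Duration;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import io.kubernetes.client.extended.leaderelection.LeaderElectionConfig;
import io.kubernetes.client.extended.leaderelection.LeaderElector;
import io.kubernetes.client.extended.leaderelection.resourcelock.LeaseLock;
import io.kubernetes.client.openapi.ApiClient;

// Sketch only: exposing LeaderElector through a lock-style API.
public class LeaderElectorLock {
    private final LeaderElector elector;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public LeaderElectorLock(ApiClient client, String namespace, String name, String identity) {
        LeaseLock leaseLock = new LeaseLock(namespace, name, identity, client);
        this.elector = new LeaderElector(new LeaderElectionConfig(
                leaseLock, Duration.ofSeconds(30), Duration.ofSeconds(20), Duration.ofSeconds(5)));
    }

    // Blocks until leadership (the "lock") is acquired or the timeout elapses.
    public boolean tryLock(Duration timeout) throws InterruptedException {
        CountDownLatch acquired = new CountDownLatch(1);
        // run() blocks, so drive it on a background thread and wait for the
        // onStartLeading callback to signal acquisition.
        executor.submit(() -> elector.run(acquired::countDown, () -> {}));
        return acquired.await(timeout.toMillis(), TimeUnit.MILLISECONDS);
    }

    // Stops renewing the lease; another candidate can take over after it expires.
    public void unlock() {
        elector.close();
        executor.shutdownNow();
    }
}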
Is this interface/implementation the most suitable for the job of a "distributed lock"? As far as I understand, the leader election feature aims to "determine the current leader and keep the other replicas as followers", in the sense that the leader will continue to be the leader until it stops sending updates.
For distributed locks, one could use Redisson RLocks, for example, or any other distributed lock implementation. We've been using RLocks for a while and they work pretty well.
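For comparison, the Redisson pattern mentioned above looks roughly like this (assuming a reachable Redis instance; the address and lock name are illustrative):

import java.util.concurrent.TimeUnit;
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonLockExample {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // example address
        RedissonClient redisson = Redisson.create(config);
        RLock lock = redisson.getLock("doSomeAction"); // distributed, named lock
        // Wait up to 10s to acquire; auto-release after 30s if unlock() is never called.
        if (lock.tryLock(10, 30, TimeUnit.SECONDS)) {
            try {
                // ... manipulate protected state ...
            } finally {
                lock.unlock();
            }
        }
        redisson.shutdown();
    }
}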
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.