
Consider throttling the number of waiters in memory

Open madelson opened this issue 4 years ago • 0 comments

In https://github.com/madelson/DistributedLock/issues/38, we consider a 2-layer locking approach where an in-memory lock must be taken before making any calls to the distributed layer. The idea is to reduce impact on the distributed layer when there is contention for the same lock within a single application.
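For reference, a minimal sketch of that 2-layer idea, written against a hypothetical `acquireDistributed` delegate standing in for the real distributed acquire (the class and member names are illustrative, not part of the library):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Rough sketch of the 2-layer approach from #38 (illustrative names only):
// a per-name in-memory lock is held for the full duration of distributed lock
// ownership, so at most one caller per process touches the distributed layer.
public static class TwoLayerLock
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> LocalLocks = new();

    public static async ValueTask<IAsyncDisposable> AcquireAsync(
        string name,
        Func<CancellationToken, ValueTask<IAsyncDisposable>> acquireDistributed,
        CancellationToken cancellationToken = default)
    {
        var local = LocalLocks.GetOrAdd(name, _ => new SemaphoreSlim(1, 1));

        // Layer 1: the in-memory lock must be taken before any distributed call.
        await local.WaitAsync(cancellationToken).ConfigureAwait(false);
        try
        {
            // Layer 2: only the in-memory winner waits on the distributed layer.
            var distributed = await acquireDistributed(cancellationToken).ConfigureAwait(false);
            return new Handle(local, distributed);
        }
        catch
        {
            local.Release();
            throw;
        }
    }

    private sealed class Handle : IAsyncDisposable
    {
        private readonly SemaphoreSlim _local;
        private readonly IAsyncDisposable _distributed;

        public Handle(SemaphoreSlim local, IAsyncDisposable distributed)
        {
            _local = local;
            _distributed = distributed;
        }

        public async ValueTask DisposeAsync()
        {
            // Release the distributed lock first, then the in-memory lock.
            await _distributed.DisposeAsync().ConfigureAwait(false);
            _local.Release();
        }
    }
}
```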

The downside of this approach, though, is that it disadvantages threads within a single application relative to threads in other applications. Another downside is that reader-writer (RW) locks are a bit tricky to implement in this scheme.

An alternative approach would be to throttle the number of waiters rather than adding a full separate locking layer. For example, suppose a named in-memory semaphore is acquired before waiting on the distributed lock and released once that wait completes. This gives us a great deal of flexibility: we can allow N waiters to be waiting on the underlying distributed implementation while queuing the rest behind the semaphore. This might offer a good balance between preserving the natural behavior of distributed locking and preventing resource exhaustion from in-process contention.
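A minimal sketch of the throttling idea, again against a hypothetical `acquireDistributed` delegate and with an illustrative `maxWaiters` parameter (none of these names are part of the library):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch only: throttle how many callers may be waiting on the
// distributed layer for a given lock name at any one time.
public sealed class WaiterThrottle
{
    // One semaphore per lock name: up to maxWaiters callers may be waiting on
    // the distributed implementation at once; the rest queue here in memory.
    private readonly ConcurrentDictionary<string, SemaphoreSlim> _semaphores = new();
    private readonly int _maxWaiters;

    public WaiterThrottle(int maxWaiters) { _maxWaiters = maxWaiters; }

    public async ValueTask<IAsyncDisposable> AcquireAsync(
        string lockName,
        Func<CancellationToken, ValueTask<IAsyncDisposable>> acquireDistributed,
        CancellationToken cancellationToken = default)
    {
        var gate = _semaphores.GetOrAdd(lockName, _ => new SemaphoreSlim(_maxWaiters, _maxWaiters));

        // Take a waiter slot before touching the distributed layer...
        await gate.WaitAsync(cancellationToken).ConfigureAwait(false);
        try
        {
            return await acquireDistributed(cancellationToken).ConfigureAwait(false);
        }
        finally
        {
            // ...and give the slot back as soon as the distributed acquire finishes,
            // so the semaphore throttles *waiting*, not lock *ownership*.
            gate.Release();
        }
    }
}
```

Unlike the 2-layer sketch above, the semaphore is released as soon as the acquire completes, so in-process callers that reach the distributed layer still compete on equal footing with callers from other applications.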

madelson · Dec 23 '21 15:12