
RwLock needs a task-fair locking policy

Open parasyte opened this issue 5 years ago • 15 comments

Long story short, I've audited at least three different async RwLock implementations 😁 and they all have the same problem: read-heavy workloads will starve writers.

In the case of async-std, the starvation happens here: https://github.com/async-rs/async-std/blob/125fa5b0a026088f16fb4c1e2df6375d0b30c0e5/src/sync/rwlock.rs#L195-L201 by incrementing the reader count without regard for the number of writers waiting to acquire the lock.

See also:

  • https://docs.rs/parking_lot/0.10.0/parking_lot/type.RwLock.html
  • https://github.com/asomers/futures-locks/issues/34
  • https://github.com/rust-lang/futures-rs/pull/2082#issuecomment-586752557

parasyte avatar Feb 16 '20 21:02 parasyte

@parasyte That looks like a good catch! Would you feel like proposing a patch?

skade avatar Feb 16 '20 23:02 skade

I've been comparing against parking_lot::RwLock. That implementation is admittedly very complex. And in all honesty, I'm not an expert in task scheduling.

Fairness is mostly achieved by scheduling tasks in the order that they attempt to acquire the lock. I can imagine this working in async-std as a potentially-interleaved FIFO queue of wakers. I'm not really sure where or how to start on that...

parasyte avatar Feb 17 '20 00:02 parasyte

@parasyte Okay, thanks for the report! I'm assigning this to someone else :).

skade avatar Feb 17 '20 07:02 skade

I think it's generally a good idea to consider priority policies in a RwLock. According to the Wikipedia page, there are at least three of them: read-preferring, write-preferring and unspecified. Only a write-preferring RwLock is needed to avoid the writer starvation problem mentioned in this issue. Are we sure that we want a fair lock that also avoids starving the readers?

ghost avatar Feb 26 '20 04:02 ghost

Write-preferring locks do not starve readers, at least not in the way read-preferring locks starve writers, where a steady stream of readers can keep a writer from ever acquiring the lock. In a write-preferring lock, a steady stream of writers cannot prevent readers from making progress; it can only limit how many readers run concurrently.

For example, a worst-case scenario would be a perfect interleaving of 1:1 readers and writers in sequence, causing the lock to act almost exactly like a Mutex. Meanwhile the ideal case will have a large number of readers and small number of writers.

parasyte avatar Feb 26 '20 08:02 parasyte

The worst-case scenario depends on the lock's policy. If the lock is taken by a writer while a reader's request comes in, the request needs to be queued to ensure that the reader gets a chance to read when the writer finishes. I believe this is what you're referring to in your comments above.

Another policy is simply to reject the reader's request. When the writer finishes, the lock is again open for anyone to grab. If the writer is really busy, it can re-acquire the lock before any reader does and effectively starve the readers.

I think both policies may have their own use cases. For example, in scenarios where the writer is much more important than the reader, the latter policy (which is really unfair to the readers) will be the desired behavior.

ghost avatar Feb 26 '20 09:02 ghost

Is a write-preferring RwLock still being considered for async-std?

I am using RwLock to allow infrequent updates to application-wide configuration - in my use case a write-preferring RwLock would allow me to guarantee that config updates will be applied.

For reference on a similar design here is Java's ReentrantReadWriteLock (see "fair mode" specifically): https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html

D1plo1d avatar Apr 21 '20 22:04 D1plo1d

Relevant: https://www.reddit.com/r/rust/comments/f4zldz/i_audited_3_different_implementation_of_async/

D1plo1d avatar Apr 21 '20 22:04 D1plo1d

Hello everyone, we are suffering some starvation problems due to this issue on our protocol routing stack. We have workarounds, but it would be great to know if there is a plan to tackle this important issue in a patch release of version 1.6.

Thanks very much for the great work on async-std.

A+

kydos avatar May 26 '20 14:05 kydos

Published a reader-writer lock with a fair locking strategy: https://github.com/stjepang/async-rwlock

ghost avatar Aug 19 '20 20:08 ghost

During my lunch break I attempted to write a quick patch to replace async-std's internal rwlock impl with async-rwlock. Unfortunately I ran into a compat issue between the two APIs: https://github.com/stjepang/async-rwlock/issues/1 — if this could be resolved it'd be really neat; everyone using async-std could get a better rwlock!

yoshuawuyts avatar Aug 24 '20 12:08 yoshuawuyts

I have published a new version of ~~async-mutex and async-rwlock~~ that supports T: ?Sized.

EDIT: The crate is async-lock:

  • https://docs.rs/async-lock/2.3.0/async_lock/struct.Mutex.html
  • https://docs.rs/async-lock/2.3.0/async_lock/struct.RwLock.html

ghost avatar Aug 29 '20 13:08 ghost

@stjepang as a quick follow-up, is that fair version you referenced integrated into any public crates? This repo is public, the referenced repo 404s for me, and this issue is open so just wondering. Would love to see a fair/write-preferring option. Thanks!

imuni4fun avatar Dec 02 '20 07:12 imuni4fun

fwiw async-rwlock is now at https://github.com/smol-rs/async-lock

Fishrock123 avatar Dec 02 '20 18:12 Fishrock123

> fwiw async-rwlock is now at https://github.com/smol-rs/async-lock

@stjepang sweet, thanks!

imuni4fun avatar Dec 05 '20 01:12 imuni4fun