async-std
RwLock needs a task-fair locking policy
Long story short, I've audited at least three different async RwLock implementations 😁 and they all have the same problem: read-heavy workloads will starve writers.
In the case of async-std, the starvation happens here: https://github.com/async-rs/async-std/blob/125fa5b0a026088f16fb4c1e2df6375d0b30c0e5/src/sync/rwlock.rs#L195-L201. The reader count is incremented without regard for the number of writers waiting to acquire the lock.
See also:
- https://docs.rs/parking_lot/0.10.0/parking_lot/type.RwLock.html
- https://github.com/asomers/futures-locks/issues/34
- https://github.com/rust-lang/futures-rs/pull/2082#issuecomment-586752557
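To make the failure mode concrete, here is a paraphrased sketch of a read-preferring acquisition path (illustrative only, not async-std's actual code; the `State` struct and function names are made up). Because a new reader only checks whether a writer currently *holds* the lock, not whether one is *waiting*, an unbroken stream of readers keeps `readers > 0` forever and a queued writer never runs:

```rust
// Illustrative state for a read-preferring rwlock (not async-std's code).
struct State {
    readers: usize,        // readers currently holding the lock
    writer_active: bool,   // whether a writer currently holds the lock
    writers_waiting: usize // writers queued behind the lock
}

fn try_acquire_read(s: &mut State) -> bool {
    if !s.writer_active {
        // The bug: `s.writers_waiting` is never consulted, so readers
        // keep slipping in ahead of any queued writer.
        s.readers += 1;
        true
    } else {
        false
    }
}

fn try_acquire_write(s: &mut State) -> bool {
    if !s.writer_active && s.readers == 0 {
        s.writer_active = true;
        true
    } else {
        // The writer must wait for all readers to drain -- but under a
        // steady stream of readers, that never happens.
        s.writers_waiting += 1;
        false
    }
}
```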
@parasyte That looks like a good catch! Would you feel like proposing a patch?
I've been comparing against parking_lot::RwLock. That implementation is admittedly very complex. And in all honesty, I'm not an expert in task scheduling.
Fairness is mostly achieved by scheduling tasks in the order that they attempt to acquire the lock. I can imagine this working in async-std as a potentially-interleaved FIFO queue of wakers. I'm not really sure where or how to start on that...
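For a rough idea of the policy change, here is a minimal blocking sketch of a write-preferring lock built on std's `Mutex` and `Condvar` (the type and field names are made up for illustration; an async version would park wakers in a FIFO at the same spots instead of blocking threads). The one line that matters is that a new reader also waits while a writer is merely *queued*, not just while one is active:

```rust
use std::sync::{Condvar, Mutex};

// Illustrative write-preferring lock; a blocking stand-in for the async
// design, not async-std's actual implementation.
struct WritePreferringLock {
    state: Mutex<State>,
    cond: Condvar,
}

#[derive(Default)]
struct State {
    readers: usize,         // readers currently holding the lock
    writer_active: bool,    // whether a writer currently holds the lock
    writers_waiting: usize, // writers queued behind the lock
}

impl WritePreferringLock {
    fn new() -> Self {
        Self { state: Mutex::new(State::default()), cond: Condvar::new() }
    }

    fn read_lock(&self) {
        let mut s = self.state.lock().unwrap();
        // Key difference from the read-preferring version: readers also
        // yield to *waiting* writers, so writers cannot be starved.
        while s.writer_active || s.writers_waiting > 0 {
            s = self.cond.wait(s).unwrap();
        }
        s.readers += 1;
    }

    fn read_unlock(&self) {
        let mut s = self.state.lock().unwrap();
        s.readers -= 1;
        if s.readers == 0 {
            self.cond.notify_all();
        }
    }

    fn write_lock(&self) {
        let mut s = self.state.lock().unwrap();
        s.writers_waiting += 1;
        while s.writer_active || s.readers > 0 {
            s = self.cond.wait(s).unwrap();
        }
        s.writers_waiting -= 1;
        s.writer_active = true;
    }

    fn write_unlock(&self) {
        let mut s = self.state.lock().unwrap();
        s.writer_active = false;
        self.cond.notify_all();
    }
}
```

Readers that arrived before the writer queued up still finish normally; only readers arriving *after* the writer are held back, which bounds the writer's wait.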
@parasyte Okay, thanks for the report! I'm assigning this to someone else :).
I think it's generally a good idea to consider priority policies in a RwLock. According to the Wikipedia page, there are at least three of them: read-preferring, write-preferring and unspecified. Only a write-preferring RwLock is needed to avoid the writer starvation problem mentioned in this issue. Are we sure that we want a fair lock that also avoids starving the readers?
Write-preferring locks do not starve readers, at least not in the sense that read-preferring locks do, where a steady stream of readers can leave writers unable to acquire the lock at all. In a write-preferring lock, a steady stream of writers cannot prevent readers from making progress; it can only limit reader concurrency.
For example, a worst-case scenario would be a perfect interleaving of 1:1 readers and writers in sequence, causing the lock to act almost exactly like a Mutex. Meanwhile the ideal case will have a large number of readers and small number of writers.
The worst-case scenario depends on the lock's policy. If the lock is taken by a writer while a reader's request comes in, the request needs to be queued to ensure that the reader gets a chance to read when the writer finishes. I believe this is what you're referring to in your comments above.
Another policy is simply to reject the reader's request. When the writer finishes, the lock is again open for anyone to grab. If the writer is really busy, it can re-acquire the lock before any reader does and effectively starve the readers.
I think both policies may have their own use cases. For example, in scenarios where the writer is much more important than the reader, the latter policy (which is really unfair to the readers) will be the desired behavior.
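The "reject" policy described above can be illustrated with std's non-blocking `try_read` (this is just a stand-in to show the semantics, not an async-std API; the function name is made up): while a writer holds the lock, a reader's request simply fails instead of being queued, and the caller decides whether to retry or move on.

```rust
use std::sync::RwLock;

// Sketch of the "reject" policy: readers are turned away while a
// writer holds the lock, rather than being queued behind it.
fn reader_rejected_while_writing() -> (bool, bool) {
    let lock = RwLock::new(0_i32);

    let guard = lock.write().unwrap();       // writer holds the lock
    let rejected = lock.try_read().is_err(); // reader is rejected, not queued
    drop(guard);                             // writer finishes

    let accepted = lock.try_read().is_ok();  // lock is open for anyone again
    (rejected, accepted)
}
```

As noted, a busy writer can re-acquire before any rejected reader retries, which is exactly the reader-starvation risk of this policy.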
Is a write-preferring RwLock still being considered for async-std?
I am using RwLock to allow infrequent updates to application-wide configuration; in my use case, a write-preferring RwLock would let me guarantee that config updates get applied.
For reference on a similar design here is Java's ReentrantReadWriteLock (see "fair mode" specifically): https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html
Relevant: https://www.reddit.com/r/rust/comments/f4zldz/i_audited_3_different_implementation_of_async/
Hello everyone, we are suffering some starvation problems due to this issue on our protocol routing stack. We have workarounds, but it would be great to know if there is a plan to tackle this important issue in a patch release of version 1.6.
Thanks very much for the great work on async-std.
A+
Published a reader-writer lock with a fair locking strategy: https://github.com/stjepang/async-rwlock
During my lunch break I attempted to write a quick patch to replace async-std's internal rwlock impl with async-rwlock. Unfortunately I ran into a compat issue between the two APIs: https://github.com/stjepang/async-rwlock/issues/1 — if this could be resolved it'd be really neat; everyone using async-std could get a better rwlock!
I have published a new version of ~~async-mutex and async-rwlock~~ that supports `T: ?Sized`.
EDIT: The crate is async-lock:
- https://docs.rs/async-lock/2.3.0/async_lock/struct.Mutex.html
- https://docs.rs/async-lock/2.3.0/async_lock/struct.RwLock.html
@stjepang as a quick follow-up, is that fair version you referenced integrated into any public crates? This repo is public, the referenced repo 404s for me, and this issue is open so just wondering. Would love to see a fair/write-preferring option. Thanks!
fwiw async-rwlock is now at https://github.com/smol-rs/async-lock
> fwiw async-rwlock is now at https://github.com/smol-rs/async-lock

@stjepang sweet, thanks!