
Missed wake call using `.shared()`

Open · steffahn opened this issue on Feb 07 '23 · 4 comments

This is the reproduction:

use std::{
    io::Write,
    pin::Pin,
    task::{Context, Poll},
};

use futures::prelude::*;

pub async fn yield_now() {
    /// Yield implementation
    struct YieldNow {
        yielded: bool,
    }

    impl Future for YieldNow {
        type Output = ();

        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
            if self.yielded {
                return Poll::Ready(());
            }

            self.yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }

    YieldNow { yielded: false }.await
}

#[tokio::main]
async fn main() {
    for _ in 0..200 {
        print!(".");
        std::io::stdout().flush().unwrap();
        for _ in 0..10000 {
            test().await;
        }
    }
    println!();
}

async fn test() {
    let f1 = yield_now().shared();
    let f2 = f1.clone();
    let x1 = tokio::spawn(async move {
        f1.now_or_never();
    });
    let x2 = tokio::spawn(async move {
        f2.await;
    });
    x1.await.ok();
    x2.await.ok();
}

It eventually hangs on most runs, both in debug and release mode.

The example is crafted to expose a weakness spotted during code review of .shared(). The problematic part is here:

https://github.com/rust-lang/futures-rs/blob/aa1f5c74251bb494d1bc7a016381c76f909316bc/futures-util/src/future/future/shared.rs#L288-L293

This code is simply insufficient to ensure the current context’s waker is actually ever woken up. The current context’s waker was stored into the inner slab earlier, in this line:

https://github.com/rust-lang/futures-rs/blob/aa1f5c74251bb494d1bc7a016381c76f909316bc/futures-util/src/future/future/shared.rs#L277

However, a concurrently running poll could already have been inside the call to the inner future’s poll before the new waker got registered, i.e. here:

https://github.com/rust-lang/futures-rs/blob/aa1f5c74251bb494d1bc7a016381c76f909316bc/futures-util/src/future/future/shared.rs#L328

As soon as the call to poll on the inner future starts, it can already be calling the waker, i.e. the waking can happen before the new waker that we care about is registered and put into the slab in the first place, and before the state is put back to IDLE.

The waking can happen before the state is put back to IDLE in particular because it is perfectly legal for a .poll call to do the waking immediately, before it returns, which is exactly what the yield_now function in the example does (copied from an old yield_now implementation in tokio).
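
To spell the race out, here is a rough interleaving of the scenario described above (a paraphrase for illustration, not the actual shared.rs code; “task A” and “task B” are just labels):

task A (drives the inner poll)              task B (polls another clone of the Shared)
1. moves the state from IDLE to POLLING
2. calls the inner future’s poll
3.   the inner future immediately wakes the
     shared Notifier, which wakes only the
     wakers that are in the slab right now
     (B’s waker is not among them yet)
                                            4. record_waker() stores B’s waker
                                               into the slab
                                            5. observes POLLING and returns
                                               Pending, relying on a later wake
6. the inner poll returns Pending
7. the state is put back to IDLE; nothing
   ever wakes B’s waker again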

steffahn · Feb 07 '23

Some initial thoughts on a potential fix: add an additional three-state enum next to the Slab inside the Mutex of the wakers field.

Right before calling the inner future’s poll method, set this state from the “default” state to “inside inner poll”. The Notifier::wake_by_ref implementation would operate as it already does and would additionally inspect this state; if it is “inside inner poll”, it changes it to “woken while inside inner poll”.

Then, after the call to the inner future’s poll method, if the poll result is Pending:

  • first lock the Mutex
  • next change the notifier state from POLLING to IDLE
  • then look into the Mutex contents and change the state back to “default”
    • if the state was “woken while inside inner poll”, wake all (new) wakers immediately (only wakers that arrived during the poll operation will be left at this point, anyway)
  • finally, unlock the Mutex

This way, a thread that observes the POLLING state and returns Pending can know that its record_waker call accessed the Mutex before the currently polling thread re-locks the Mutex to handle wakes-during-polling, so its registered waker is never ignored.
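
For concreteness, here is a minimal standalone sketch of that idea, assuming a simplified Notifier (all names and details are invented for illustration: a Vec instead of the Slab, only IDLE/POLLING for the atomic state, and none of the completion or panic handling of the real shared.rs):

use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
use std::sync::Mutex;
use std::task::Waker;

const IDLE: usize = 0;
const POLLING: usize = 1;

// The proposed extra state, stored next to the waker collection inside the Mutex.
enum InnerPollState {
    Default,
    InsideInnerPoll,
    WokenWhileInsideInnerPoll,
}

struct Notifier {
    state: AtomicUsize,                          // existing IDLE/POLLING/... state
    wakers: Mutex<(InnerPollState, Vec<Waker>)>, // registered wakers plus the new enum
}

impl Notifier {
    // Roughly what wake_by_ref would do: in addition to waking the currently
    // registered wakers, remember that a wake arrived during the inner poll.
    fn wake_by_ref(&self) {
        let mut guard = self.wakers.lock().unwrap();
        if matches!(guard.0, InnerPollState::InsideInnerPoll) {
            guard.0 = InnerPollState::WokenWhileInsideInnerPoll;
        }
        for waker in guard.1.drain(..) {
            waker.wake();
        }
    }

    // Called by the polling task right before it calls the inner future's poll.
    fn before_inner_poll(&self) {
        debug_assert_eq!(self.state.load(SeqCst), POLLING);
        self.wakers.lock().unwrap().0 = InnerPollState::InsideInnerPoll;
    }

    // Called by the polling task after the inner poll returned Pending.
    fn after_inner_poll_pending(&self) {
        let mut guard = self.wakers.lock().unwrap();
        // POLLING -> IDLE happens while the Mutex is held, so any record_waker
        // call that still observed POLLING has already added its waker below.
        self.state.store(IDLE, SeqCst);
        if matches!(guard.0, InnerPollState::WokenWhileInsideInnerPoll) {
            // Only wakers that arrived during the inner poll can be left here;
            // wake them so no registration made during the poll is lost.
            for waker in guard.1.drain(..) {
                waker.wake();
            }
        }
        guard.0 = InnerPollState::Default;
        // the guard is dropped here, unlocking the Mutex
    }
}

Whether wake_by_ref should still wake the already-registered wakers while in the “inside inner poll” state, as it does in this sketch, or only flip the flag, is the leeway point discussed further below.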


In more detail:

The previously problematic setting is when a record_waker call happens while the inner .poll call is already in progress and after that call has already executed its guaranteed eventual wakeup, and when, after such a record_waker call, the POLLING state belonging to that same poll of the inner future is still observed. Otherwise there is no problem, because either there would already have been a subsequent poll that does consider the registered waker, or we would be doing such a poll ourselves (or the future could have finished).

In this setting, the polling thread first records the “inside inner poll” state, and the wake happening during execution of the inner poll then puts it into the “woken while inside inner poll” state. Then the problematic thread calls record_waker, adding a new waker to the collection while the state is “woken while inside inner poll”. Afterwards this thread observes POLLING, so this record_waker is known to happen before the change from POLLING to IDLE in the polling thread. That POLLING-to-IDLE change happens while holding the Mutex that was also accessed by record_waker, so that Mutex acquisition also happens after the call to record_waker. Hence the polling thread will observe both the “woken while inside inner poll” state and a list of wakers including the newly registered one, which can then be woken.


I guess there’s some implementation leeway on whether a Notifier::wake_by_ref while in (one of) the “inside inner poll” state(s) should perhaps only update the state (to “woken while inside inner poll”) and not do any waking yet, as any awoken task would run into a currently-POLLING state anyway and would then need to be re-awoken soon after.

Perhaps more importantly, I have not thought about panic case(s) yet. Those would presumably also need to be handled somewhat properly.

steffahn · Feb 07 '23

Trying to start an implementation of a fix, I came across a design question where I’m not sure what the best solution should be.

If the poll method panics, then the caller of the Shared clone that is currently doing the polling will get that panic propagated properly, but all other clones of the Shared may never learn about the panic, since – as far as I’m aware – a panicking poll gives no promises as to whether or not the waker in the context you gave it will ever actually be woken.

A possible solution is to make sure to wake all other wakers in case of a panic, so they have a chance to learn of the panicking poll themselves. (From their point of view, they get woken, then call poll, and then get a panic themselves, which is the desired behavior, AFAICT, unlike the current approach of potentially leaving them unwoken indefinitely without ever experiencing a panicking poll call themselves.)

Now, the problem with this approach: calling a bunch of wakers in the panic case could be done through code in a Drop implementation, with the effect that if any of the wakers panics on the .wake call, that’s a double panic, i.e. an abort. So the question is whether we can simply assume that reasonable wakers never panic, so the possibility of an abort would be considered “fine”. Alternatively, I guess it could be a reasonable approach to use catch_unwind instead of destructors, so that a panicking waker would not result in an immediate abort; but if panicking wakers are a thing to consider, there are follow-up questions such as: if one waker panics, do we still need to wake all subsequent wakers in the Slab? If yes, that’s a bunch of code, and a lot of catch_unwind invocations…
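
To illustrate the two directions, here is a rough sketch (helper names invented, not the actual shared.rs code): variant A wakes whatever is still registered from a drop guard, so it also runs while a panic from the inner poll is unwinding, at the cost of a double-panic abort if a waker itself panics; variant B puts each wake behind catch_unwind, so one panicking waker neither aborts nor prevents the remaining wakers from being woken.

use std::panic::{catch_unwind, AssertUnwindSafe};
use std::task::Waker;

// Variant A: a drop guard that wakes everything still registered. If it runs
// because a panic is unwinding out of the inner poll and a waker then panics
// inside wake(), that is a double panic, i.e. an abort.
struct WakeOnDrop<'a>(&'a mut Vec<Waker>);

impl Drop for WakeOnDrop<'_> {
    fn drop(&mut self) {
        for waker in self.0.drain(..) {
            waker.wake();
        }
    }
}

// Variant B: wake each waker behind catch_unwind, so a panicking waker is
// contained and the remaining wakers in the collection still get woken.
fn wake_all_catching(wakers: &mut Vec<Waker>) {
    for waker in wakers.drain(..) {
        let _ = catch_unwind(AssertUnwindSafe(move || waker.wake()));
    }
}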

So really TL;DR, how bad are missed wake-ups in cases where panics happened somewhere? (I suppose bad enough that they do warrant a fix.) And how seriously should the case of panicking wakers be considered? (I’m unsure on this, and interested in hearing about guidelines or prior art on the question of panicking wakers.)

steffahn · Feb 11 '23

Wanted to pop in and say thank you for working on this issue. I went down a rabbit hole today debugging an application that uses Shared and was dropping futures on the ground. This appears to be the root cause of my bug.

LPGhatguy · Feb 13 '23

Thanks for working on this!

how bad are missed wake-ups in cases where panics happened somewhere?

It is not ideal, but fine. (Shared is neither UnwindSafe nor RefUnwindSafe. And IMO, a potential abort is bad anyway.)

guidelines or prior art on the question of panicking wakers

I do not know of any prior art or guidelines for this particular case, but https://github.com/tokio-rs/tokio/pull/3689 is somewhat relevant.

taiki-e · Feb 28 '23