
Confused about Condvar

Open doolyshark opened this issue 4 years ago • 5 comments

Hey, thank you, I really enjoyed this tutorial.

So we have

while !*resumable {
    resumable = self.1.wait(resumable).unwrap();
}

When the thread expires, we call wake, which calls unpark, which sets the value inside Parker's mutex to true; but I'm confused as to how notify_one works with wait so we can resume the loop. Presumably there is some mechanism in notify_one that finds a thread[id] = waiting_id, but it's unclear how that happens.

On Unix there are pthread_cond_t, pthread_cond_wait, and pthread_cond_signal, but all the descriptions explain the same pattern outlined in your post. Assuming there is only one thread (if there are more, it seems to get handed off to the scheduler), how does pthread_cond_signal send a signal to pthread_cond_wait across the thread boundary so it knows to resume?

I also found notes from an OS class that use thr_continue(id), but I cannot find much on thr_continue. Apologies if many of these details are unnecessary.

doolyshark avatar Jun 24 '20 20:06 doolyshark

That's very nice to hear.

Now, what you're asking is platform specific and really an OS-implementation detail not related to Futures in Rust, but I'll try to explain what I think happens at a high level:

Condvar::wait(lock) hooks into an OS-backed condition variable. The OS keeps track of which threads it has parked waiting for an event on this condition variable, so it knows that only those threads should be woken when a condition change is signalled using Condvar::notify_one/Condvar::notify_all.

The OS can easily map a condvar_id to multiple thread_ids, which is what I suspect happens when you call pthread_cond_init on Linux. It creates a condvar you can share between threads. When another thread calls wait on that condvar, the OS maps its thread_id to the condition variable id, so it knows which threads are waiting for the condition to change.

When a thread calls notify_one on that condvar, the OS looks up which threads are waiting and wakes one of them. In the case of notify_all it wakes all of them.

To be honest, I needed to read up on this myself since I haven't looked into how Condvars are implemented in the OS in a long time. The important aspect here is that this is tightly integrated with the OS. You could probably implement something like this yourself, but that is a fun project on its own.
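To make it a bit more concrete, here's a minimal, self-contained sketch of the Parker pattern, assuming (as the self.0/self.1 accesses in your snippet suggest) a (Mutex<bool>, Condvar) tuple struct; the small main and the timing are just for illustration:

use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

#[derive(Default)]
struct Parker(Mutex<bool>, Condvar);

impl Parker {
    fn park(&self) {
        // Lock the flag; Condvar::wait atomically releases the lock while the
        // thread is parked and re-acquires it before returning.
        let mut resumable = self.0.lock().unwrap();
        while !*resumable {
            // The OS suspends this thread and records that it is waiting on
            // this condition variable.
            resumable = self.1.wait(resumable).unwrap();
        }
        *resumable = false;
    }

    fn unpark(&self) {
        // Set the flag first, then ask the OS to wake one thread parked on
        // this condition variable.
        *self.0.lock().unwrap() = true;
        self.1.notify_one();
    }
}

fn main() {
    let parker = Arc::new(Parker::default());
    let p = Arc::clone(&parker);

    let handle = thread::spawn(move || {
        thread::sleep(Duration::from_millis(100));
        p.unpark(); // corresponds to wake() -> unpark() in the book
    });

    parker.park(); // blocks here until the other thread calls unpark
    handle.join().unwrap();
}

The key point is that notify_one doesn't have to find the waiting thread itself: when the other thread called wait, the OS already recorded that it's parked on this condition variable, so the notify just tells the OS to resume one of those threads.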

cfsamson avatar Jun 24 '20 22:06 cfsamson

This is pretty much what I expected, but still nice to hear given that these details are abstracted out of the sys/sys_core structs. I knew it was a bit off topic, but the lines can become a bit blurry, so thank you for the response.

One more unrelated question, if you don't mind:

Assuming the reactor does some I/O bound work, how does the reactor know when a non-blocking Task will be able to call wake without a timer? What other sort of data might the Task register that could help?

// aside

It creates a condvar you can share between threads

This is one of the reasons why I was curious - so even after the atomics, we still need to rely on the OS to ensure thread safety.

fooxed avatar Jun 25 '20 03:06 fooxed

Hi.

Assuming the reactor does some I/O bound work, how does the reactor know when a non-blocking Task will be able to call wake without a timer? What other sort of data might the Task register that could help?

It uses the OS, and an API like epoll, kqueue, IOCP, or io_uring (depending on the platform), to register interest in events on a resource (like a socket), together with a way to identify the event (this varies between the APIs, but it can be as simple as passing in a token of some kind).

It then uses one thread to make a blocking call. The difference is that this one thread can block and wait on 100,000 events, while normal blocking APIs use one thread to wait for each event.

When the OS wakes the thread because an event is ready, the reactor calls wake on the Waker associated with that event.
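As a rough, hypothetical sketch (not how any particular runtime does it exactly), registering interest and blocking on events with mio could look something like this; the address and Token(0) are made up for the example, and it assumes mio with the "os-poll" and "net" features:

use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};
use std::io;
use std::net::SocketAddr;

fn main() -> io::Result<()> {
    // Poll wraps epoll/kqueue/IOCP depending on the platform.
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(128);

    // Register interest in READABLE events on this socket, identified by Token(0).
    let addr: SocketAddr = "127.0.0.1:8080".parse().unwrap();
    let mut stream = TcpStream::connect(addr)?;
    poll.registry()
        .register(&mut stream, Token(0), Interest::READABLE)?;

    // One thread blocks here, but it can wait on a huge number of registered
    // sources at the same time.
    poll.poll(&mut events, None)?;

    for event in events.iter() {
        if event.token() == Token(0) {
            // A real reactor would look up the Waker registered under this
            // token and call wake() so the executor polls the future again.
            println!("socket is ready");
        }
    }
    Ok(())
}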

This is one of the reasons why I was curious - so even after the atomics, we still need to rely on the OS to ensure thread safety.

Not really; both atomics and mutexes are thread safe, but since we want the thread to be parked (so it doesn't consume any CPU resources) until the condition changes, we use a Condvar. We could in theory spin-loop on an atomic flag to see if it's changed, but that would be very inefficient since we'd waste so many CPU cycles, and each atomic load/compare involves the cache coherency mechanism to prevent any writes to that memory location from any other core. This adds overhead, even on cores that are not looping to check the variable.
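For comparison, the spin-loop alternative on an atomic flag would look roughly like this (a deliberately wasteful sketch; the loop keeps the core busy the whole time it waits instead of letting the OS park the thread):

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let f = Arc::clone(&flag);

    let handle = thread::spawn(move || {
        thread::sleep(Duration::from_millis(100));
        // The "waker" side: flip the flag when the event is ready.
        f.store(true, Ordering::Release);
    });

    // Busy-wait until the flag flips. Unlike Condvar::wait, the OS never gets
    // a chance to park this thread, so it burns CPU the whole time.
    while !flag.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }

    handle.join().unwrap();
}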

cfsamson avatar Jun 25 '20 08:06 cfsamson

register interest in events on a resource

I actually meant to ask about this too. I briefly looked into the Mio Interest definition; it seems like "interest" is like a flag that will "register" events like a stream on a resource. So in your example (socket), if our Mio event::Source is a TcpStream, the registry uses a Selector, which will eventually be one of the APIs you reference depending on the OS. Guess I'll have to read Epoll, Kqueue and IOCP Explained with Rust 😄

could in theory spin loop on an atomic flag

I get what you're saying, but with I/O bound work this is pretty much never practical, right?

Hi

FYI this is the same user as @doolyshark. I can't close this issue until I have access to that account, but feel free to close it whenever.

fooxed avatar Jun 25 '20 18:06 fooxed

Yeah, I think you're on the right track, but it should be even more clear if you read part 1 of the Epoll, Kqueue, IOCP book.

You'll see the APIs are pretty different from each other. An interest with respect to I/O is mostly either a Read or a Write interest. The flag is just there to indicate which of them you're interested in, and in combination with a resource (like a file descriptor on Linux) and a token identifying the exact event, you have enough information to make it work across different platforms. (Windows uses a pretty interesting trick for how you identify a specific event though, which is not a traditional "token", but it's nevertheless a way to identify exactly which event is ready.)
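To illustrate the token part, a reactor typically just keeps a map from token to Waker, so when the OS reports that the event behind a token is ready, it knows exactly which task to wake. A hypothetical sketch (the Reactor type and its field names are made up for illustration):

use std::collections::HashMap;
use std::task::Waker;

// Hypothetical reactor state: every registered event gets a unique token,
// and the reactor remembers which Waker belongs to it.
struct Reactor {
    wakers: HashMap<usize, Waker>,
}

impl Reactor {
    // Called by the task when it registers interest in an event.
    fn register(&mut self, token: usize, waker: Waker) {
        self.wakers.insert(token, waker);
    }

    // Called from the event loop when the OS reports that the event
    // identified by `token` is ready.
    fn handle_ready(&mut self, token: usize) {
        if let Some(waker) = self.wakers.remove(&token) {
            waker.wake();
        }
    }
}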

I get what you're saying, but with I/O bound work this is pretty much never practical, right?

Nope. That's a bad idea not only for I/O bound work, but in most cases when you have an OS that can achieve the same thing without wasting cycles. As always there are exceptions where it might perform better, but that's rare.

It's not a big problem to leave it open as someone might have the same question as you and either want to add something to the discussion or see this and find an answer 😃

cfsamson avatar Jun 25 '20 18:06 cfsamson