coio-rs
Coroutine I/O for Rust
This bug was originally found in #53 . The cause: the standard library keeps a `PANIC_COUNT` in TLS to detect a panic occurring while the runtime is already panicking. But obviously, coroutines in coio...
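The root cause can be seen without coio at all: `thread_local!` state is keyed by OS thread, so it does not follow a coroutine that migrates to another thread. A minimal sketch in plain `std` (the `COUNT` cell and `bump_and_read` helper are hypothetical stand-ins for the stdlib's panic-count bookkeeping, not real internals):

```rust
use std::cell::Cell;
use std::thread;

thread_local! {
    // Stand-in for the stdlib's PANIC_COUNT: state that is
    // per-OS-thread, not per-logical-task.
    static COUNT: Cell<usize> = Cell::new(0);
}

// Increment this thread's counter and return its new value.
fn bump_and_read() -> usize {
    COUNT.with(|c| {
        c.set(c.get() + 1);
        c.get()
    })
}

fn main() {
    // On the current thread the counter accumulates across calls...
    bump_and_read();
    let on_main = bump_and_read();
    assert_eq!(on_main, 2);

    // ...but another thread sees a fresh counter, because TLS is
    // keyed by OS thread. A coroutine that panics on one thread and
    // unwinds on another would hit exactly this mismatch.
    let on_other = thread::spawn(bump_and_read).join().unwrap();
    assert_eq!(on_other, 1);
}
```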
Reproducible example:

```rust
extern crate coio;
extern crate env_logger;

use coio::Scheduler;

fn main() {
    env_logger::init();
    Scheduler::new()
        .run(|| {
            struct A;
            impl Drop for A {
                fn drop(&mut self) {
                    Scheduler::sched();
                    // ...
```
`coroutine_unwind` was invoking undefined behavior. I've replaced it with an `AtomicBool` flag that triggers the same unwind code, now moved into `yield_with`. This isn't optimal yet (I'll try to get it passing...
If I understand it correctly, the coroutines here can move from thread to thread. Now imagine I have something that is not `Send` for whatever reason. And I don't mean...
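What thread-to-thread migration forbids can be expressed as a trait bound. A minimal sketch (the `require_send` helper is hypothetical, not a coio API):

```rust
use std::sync::Arc;

// A stand-in for what a migrating scheduler must require: anything
// it runs has to be safe to hand to another OS thread.
fn require_send<T: Send>(value: T) -> T {
    value
}

fn main() {
    // Arc<i32> is Send, so this compiles and runs fine.
    let shared = require_send(Arc::new(41));
    assert_eq!(*shared + 1, 42);

    // Rc<i32>, by contrast, is !Send; uncommenting the next line is a
    // compile error. That is exactly the guarantee at stake when a
    // coroutine holding an Rc can be resumed on a different thread.
    // let local = require_send(std::rc::Rc::new(41));
}
```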
For now we can only add `#[inline(never)]` to the `Coroutine::yield_with` method. You can reproduce the bug anytime by replacing it with `#[inline(always)]` and then running `cargo test --release`; you will see the...
Hi Zonytoo, I'd like to use coio in a public-facing service accepting anonymous TCP connections. If only to avoid file descriptor exhaustion, there has to be a limit on the...
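The issue doesn't say how coio should expose such a limit; the usual building block is a counting semaphore acquired before each accept and released when the connection closes. A minimal sketch from `std` primitives (the `Semaphore` type and its names are hypothetical, not part of coio):

```rust
use std::sync::{Condvar, Mutex};

// Minimal counting semaphore: blocks acquire() while no permits remain.
struct Semaphore {
    count: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { count: Mutex::new(permits), cv: Condvar::new() }
    }

    fn acquire(&self) {
        let mut n = self.count.lock().unwrap();
        while *n == 0 {
            n = self.cv.wait(n).unwrap();
        }
        *n -= 1;
    }

    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let limit = Semaphore::new(2);
    // Accept-loop sketch: acquire before spawning a handler,
    // release when the connection is done.
    limit.acquire();
    // ... handle one connection ...
    limit.release();
    assert_eq!(*limit.count.lock().unwrap(), 2);
}
```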
In coio, the Coroutine is the minimum unit of execution. It would be nice if we could have a `coroutine_local!` macro implementation, just like `thread_local!` in `std`. Implementation details will be...
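For reference, this is the `std` pattern being asked for: `thread_local!` gives each OS thread its own slot, and the proposed `coroutine_local!` would give each coroutine its own. A sketch of the `thread_local!` shape (the `record` helper is illustrative only):

```rust
use std::cell::RefCell;

// Each OS thread gets its own independent Vec behind this key;
// a coroutine_local! would scope the slot to a coroutine instead.
thread_local! {
    static NAMES: RefCell<Vec<String>> = RefCell::new(Vec::new());
}

// Append a name to this thread's slot and return its new length.
fn record(name: &str) -> usize {
    NAMES.with(|n| {
        n.borrow_mut().push(name.to_string());
        n.borrow().len()
    })
}

fn main() {
    assert_eq!(record("a"), 1);
    assert_eq!(record("b"), 2);
}
```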
Minimal test case:

```rust
extern crate coio;

use coio::Scheduler;
use coio::sync::mpsc;

fn main() {
    Scheduler::new().run(|| {
        let (tx, rx) = mpsc::sync_channel(1);
        let h = Scheduler::spawn(move || {
            tx.send(1).unwrap();
        });
        assert_eq!(rx.recv().unwrap(), ...
```
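For contrast, the same program written against `std`'s mpsc and OS threads runs to completion, which suggests the problem lies in how coio's channel interacts with its scheduler rather than in the program's logic:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Same shape as the coio test case, but on OS threads.
    let (tx, rx) = mpsc::sync_channel(1);
    let h = thread::spawn(move || {
        tx.send(1).unwrap();
    });
    assert_eq!(rx.recv().unwrap(), 1);
    h.join().unwrap();
}
```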
All I/O operations need to support timeout parameters, including `read_timeout` and `write_timeout`. This issue is related to #24, and I have implemented an experimental version in the `experimental-io_timeout` branch. But...
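`std`'s sockets already expose the shape this issue asks for: a per-operation deadline instead of blocking forever. A sketch using `UdpSocket::set_read_timeout` (plain `std`, no coio):

```rust
use std::io::ErrorKind;
use std::net::UdpSocket;
use std::time::Duration;

fn main() {
    // Bind to an ephemeral local port; nothing will ever send to it.
    let sock = UdpSocket::bind("127.0.0.1:0").unwrap();
    sock.set_read_timeout(Some(Duration::from_millis(50))).unwrap();

    // The read gives up after ~50ms instead of blocking forever.
    let mut buf = [0u8; 16];
    let err = sock.recv_from(&mut buf).unwrap_err();

    // A timed-out read surfaces as WouldBlock on Unix, TimedOut on Windows.
    assert!(matches!(err.kind(), ErrorKind::WouldBlock | ErrorKind::TimedOut));
}
```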
Right now each `Processor` will wait forever on its own channel until notified by a `ProcMessage`, so it never tries to steal jobs from the other `Processor`s! So we have to...
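One common fix is to replace the blocking receive with a bounded wait, so an idle processor periodically wakes up and attempts a steal. A sketch of that loop shape using `std`'s `recv_timeout` (the steal step is only a comment; the message type and names are hypothetical, not coio's):

```rust
use std::sync::mpsc;
use std::time::Duration;

fn main() {
    // Stand-in for a Processor's private channel; the sender is kept
    // alive so the channel never reports Disconnected here.
    let (_tx, rx) = mpsc::channel::<u32>();
    let mut steal_attempts = 0;

    for _ in 0..3 {
        match rx.recv_timeout(Duration::from_millis(10)) {
            Ok(_msg) => { /* handle the ProcMessage */ }
            Err(mpsc::RecvTimeoutError::Timeout) => {
                // Nothing arrived within the deadline: a real
                // Processor would try to steal work from a sibling
                // here instead of sleeping forever.
                steal_attempts += 1;
            }
            Err(mpsc::RecvTimeoutError::Disconnected) => break,
        }
    }

    // With no messages sent, every iteration fell through to a steal.
    assert_eq!(steal_attempts, 3);
}
```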