Async version?
Is an async version of this library planned? Have you used it in async applications? If yes, how? The embassy-net TCP socket only implements the embedded_io_async traits.
No, currently there are no plans to support async IO. Take a look at a similar issue: #13
You can add RX and TX buffers and fill the RX buffer from your async code. Then, in some dedicated "thread", you take the bytes one by one from the RX buffer and pass them to the cli (a rough sketch of such a loop is below). To the cli you pass a writer that appends bytes to the TX buffer. And in your async code you take bytes from the TX buffer and write them to your IO backend. Of course, you will have to handle overflows somehow.
The RX/TX buffer approach could be supported by the library one day, but currently I don't have the resources for that. PRs are welcome, though.
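For example, that dedicated loop could look roughly like this. This is only a sketch: it assumes the RX buffer is the consumer half of a heapless SPSC queue, and feed_cli is a placeholder closure for however you actually pass a byte into the cli (not this library's real API).
use heapless::spsc::Consumer;

/// Dedicated "thread": drain whatever has arrived in RX and feed it to the cli.
/// Call this repeatedly (or wrap it in a loop with a small delay).
fn cli_pump<const N: usize>(
    rx: &mut Consumer<'_, u8, N>, // consumer half of the RX queue
    mut feed_cli: impl FnMut(u8), // placeholder for the real cli call
) {
    while let Some(byte) = rx.dequeue() {
        feed_cli(byte);
    }
}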
What writer do you use for writing to a buffer? Do you implement it for every async project? Is it in a crates.io crate?
I don't use this library in async projects, so I don't have such a writer. As I said, just create two buffers (RX and TX) behind a mutex. You can do something like this for TX (the RX side is similar). It was mostly generated by ChatGPT, but you get the idea:
use core::cell::RefCell;
use core::convert::Infallible;

use critical_section::Mutex;
use embedded_io::{ErrorType, Write};
use heapless::spsc::Queue;

pub struct BufferWriter<const BUF_SIZE: usize> {
    // Thread-safe queue for the TX buffer (heapless spsc holds BUF_SIZE - 1 bytes)
    queue: Mutex<RefCell<Queue<u8, BUF_SIZE>>>,
}

impl<const BUF_SIZE: usize> BufferWriter<BUF_SIZE> {
    pub const fn new() -> Self {
        Self {
            queue: Mutex::new(RefCell::new(Queue::new())),
        }
    }

    /// Push a byte into the buffer. Returns `Err` if the buffer is full.
    pub fn push(&self, byte: u8) -> Result<(), ()> {
        critical_section::with(|cs| {
            let mut queue = self.queue.borrow_ref_mut(cs);
            queue.enqueue(byte).map_err(|_| ()) // Return an error if the buffer is full
        })
    }

    /// Pop a byte from the buffer. Returns `None` if the buffer is empty.
    pub fn pop(&self) -> Option<u8> {
        critical_section::with(|cs| {
            let mut queue = self.queue.borrow_ref_mut(cs);
            queue.dequeue()
        })
    }

    /// Check if the buffer is empty.
    pub fn is_empty(&self) -> bool {
        critical_section::with(|cs| self.queue.borrow_ref(cs).is_empty())
    }

    /// Check if the buffer is full.
    pub fn is_full(&self) -> bool {
        critical_section::with(|cs| self.queue.borrow_ref(cs).is_full())
    }
}

/// embedded-io declares the error type through the `ErrorType` trait.
impl<const BUF_SIZE: usize> ErrorType for BufferWriter<BUF_SIZE> {
    // Writing never fails here: on overflow `write` just reports fewer bytes written.
    type Error = Infallible;
}

/// Implement the embedded_io::Write trait.
impl<const BUF_SIZE: usize> Write for BufferWriter<BUF_SIZE> {
    fn write(&mut self, data: &[u8]) -> Result<usize, Self::Error> {
        let mut written = 0;
        for &byte in data {
            if self.push(byte).is_err() {
                break; // Stop writing if the buffer is full
            }
            written += 1;
        }
        Ok(written) // Return the number of bytes successfully written
    }

    fn flush(&mut self) -> Result<(), Self::Error> {
        // Flushing is a no-op here; the async side drains the buffer.
        Ok(())
    }
}
This writer is passed to the cli. And somewhere in your async code you call writer.pop() and write the received bytes to your async IO.
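A rough sketch of that draining side, assuming the BufferWriter above is shared (push/pop take &self, so e.g. a static works) and that your IO backend implements embedded_io_async::Write (as the embassy-net TCP socket does). The function name and chunk size are arbitrary:
use embedded_io_async::Write as AsyncWrite;

/// Hypothetical async task: drain the TX buffer into the async IO backend.
/// `tx` is the shared BufferWriter from above; `io` could be e.g. an
/// embassy-net TcpSocket.
async fn drain_tx<const BUF_SIZE: usize, W: AsyncWrite>(
    tx: &BufferWriter<BUF_SIZE>,
    io: &mut W,
) -> Result<(), W::Error> {
    let mut chunk = [0u8; 32];
    loop {
        // Gather whatever is currently buffered into a small chunk.
        let mut n = 0;
        while n < chunk.len() {
            match tx.pop() {
                Some(byte) => {
                    chunk[n] = byte;
                    n += 1;
                }
                None => break,
            }
        }
        if n > 0 {
            // Write the chunk out, handling short writes.
            let mut sent = 0;
            while sent < n {
                sent += io.write(&chunk[sent..n]).await?;
            }
        } else {
            // Nothing buffered right now: yield or sleep briefly here
            // (e.g. with an embassy-time timer) so this loop doesn't spin.
        }
    }
}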
So you just implement it every time, right?
No, as I said, I'm not using async in my embedded projects. And in sync projects the writer is just a small wrapper around a buffer, so there is nothing to extract into a separate crate.
Can you show me an example? I don't get how I'm supposed to flush the buffer out if I move the writer to the cli when I build it.
Take a look at the API of the heapless queue. There is a method for splitting it into a producer and a consumer: https://docs.rs/heapless/latest/heapless/spsc/index.html
The writer then holds the producer, and you keep the consumer to take out the bytes that were written.
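A sketch of what that could look like, assuming heapless 0.8 and embedded-io 0.6. QueueWriter and make_tx_pair are names made up for this example, not part of any crate:
use core::convert::Infallible;

use embedded_io::{ErrorType, Write};
use heapless::spsc::{Consumer, Producer, Queue};

/// Writer half: owns the producer side of a split TX queue.
/// Move this into the cli when you build it.
pub struct QueueWriter<const N: usize> {
    producer: Producer<'static, u8, N>,
}

impl<const N: usize> ErrorType for QueueWriter<N> {
    type Error = Infallible;
}

impl<const N: usize> Write for QueueWriter<N> {
    fn write(&mut self, data: &[u8]) -> Result<usize, Self::Error> {
        let mut written = 0;
        for &byte in data {
            if self.producer.enqueue(byte).is_err() {
                break; // queue is full; how to handle the overflow is up to you
            }
            written += 1;
        }
        Ok(written)
    }

    fn flush(&mut self) -> Result<(), Self::Error> {
        Ok(()) // nothing to do: the async side drains the consumer
    }
}

/// Split a statically allocated queue into the writer (for the cli)
/// and the consumer (for your async TX task).
pub fn make_tx_pair<const N: usize>(
    queue: &'static mut Queue<u8, N>,
) -> (QueueWriter<N>, Consumer<'static, u8, N>) {
    let (producer, consumer) = queue.split();
    (QueueWriter { producer }, consumer)
}
The &'static mut Queue can come from a static (for example via static_cell::StaticCell). The QueueWriter goes into the cli, and the Consumer stays in your async task, which calls consumer.dequeue() and writes the bytes to the IO backend, so nothing ever has to be "flushed" out of the writer itself.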