SPI write without read
We are using a W5500 ethernet controller, which uses a slightly different SPI protocol. Instead of supporting only 1/2/4-byte reads and writes, it also supports N-byte reads and writes, likely to make sending and receiving larger network packets more efficient. For example, sending a packet might look like this:
```
0f bc 04 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff
01 02 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
header   15 bytes to write to address 0x0fbc
```
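In other words, a variable-length write is just a 3-byte header (the two address bytes plus the W5500's control byte, 0x04 in the example) followed by the payload. Here is a minimal sketch of laying out such a frame; the helper name and the caller-provided buffer are my own, not part of any driver in this thread:

```rust
/// Hypothetical helper: build a W5500 write frame in a caller-provided buffer,
/// matching the example transfer above. Returns the number of bytes to clock out.
/// Panics if `frame` is smaller than 3 + payload.len().
fn build_write_frame(addr: u16, control: u8, payload: &[u8], frame: &mut [u8]) -> usize {
    frame[0] = (addr >> 8) as u8; // address high byte (0x0f above)
    frame[1] = addr as u8;        // address low byte  (0xbc above)
    frame[2] = control;           // control byte      (0x04 above)
    frame[3..3 + payload.len()].copy_from_slice(payload);
    3 + payload.len()
}
```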
Reading incoming data looks similar, where the master sends 00 bytes and the W5500 returns a byte of packet data for each 00 byte.
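So a read is the same 3-byte header followed by N dummy 00 bytes sent in a single transfer, with the packet data landing in the corresponding positions of the receive buffer. A rough sketch in the same style as the workaround below; the 512-byte scratch buffers and the control value are assumptions:

```rust
/// Sketch only: read `buf.len()` bytes starting at `addr` by sending the header
/// plus dummy 00 bytes in one transfer and keeping what the W5500 shifts back.
async fn read_bytes(&mut self, addr: u16, buf: &mut [u8]) -> Result<(), SPI::Error> {
    let mut tx = [0u8; 3 + 512];
    let mut rx = [0u8; 3 + 512];
    let n = 3 + buf.len();
    tx[0] = (addr >> 8) as u8;
    tx[1] = addr as u8;
    tx[2] = 0x00; // control byte with the write bit clear (assumed)
    self.1.read_write(&mut rx[..n], &tx[..n]).await?;
    buf.copy_from_slice(&rx[3..n]); // first 3 received bytes are don't-care
    Ok(())
}
```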
If I want to send a packet and only issue a write operation, the operation seems to hang because there are unread bytes left in the receive buffer. My current workaround is to reserve a large buffer, do a read_write, and then discard the buffer, which fills up with 00 bytes:
```rust
async fn write_bytes(&mut self, bytes: &[u8]) -> Result<(), SPI::Error> {
    // Dummy RX buffer: read_write needs somewhere to put the clocked-in bytes,
    // even though we never look at them.
    let mut padding = [0; 512];
    self.1.read_write(&mut padding[..bytes.len()], bytes).await
}
```
I was wondering if it'd be possible to add a method (if there isn't one already that I missed) to only write bytes, while ignoring the bytes sent in response.
There's .write(), which should indeed ignore incoming bytes. If it's hanging, that's a bug.
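For reference, if .write() does discard the bytes arriving on MISO as intended, the workaround above should reduce to something like this (same hypothetical wrapper struct as before):

```rust
// Sketch: relies on write() ignoring whatever the W5500 shifts back on MISO.
async fn write_bytes(&mut self, bytes: &[u8]) -> Result<(), SPI::Error> {
    self.1.write(bytes).await
}
```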
Which chip are you using?
@darkwater could you confirm which chip you're using?
If it was STM32, there have been a few hang fixes: #663 #664. Could you check again? Thank you! :)
Sorry for the delay! It is indeed an STM32L476RG. I'm having some trouble testing out the fixes, since we're on our own branch and too much has changed since then.
Looking at the fixes, this flush seems like it might solve our issue. In our branch, write_dma_u8 ends with a call to finish_dma, which calls spin_until_idle, which seems to wait until the RX FIFO is empty.
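For context, a heavily simplified sketch of that idea, using hypothetical accessors in place of the real SR/DR register reads; this is illustrative only, not the actual embassy code or our branch's spin_until_idle:

```rust
/// Hypothetical view of the SPI peripheral, just enough for the sketch below.
/// On the STM32L4 these correspond to SR.FRLVL, SR.BSY and the DR register.
trait SpiHw {
    fn rx_fifo_level(&self) -> u8;
    fn busy(&self) -> bool;
    fn read_dr(&mut self) -> u8;
}

/// One way such a flush could work: drain whatever is left in the RX FIFO after
/// a write-only transfer, then wait for the peripheral to go idle.
fn flush_rx_and_wait_idle<S: SpiHw>(spi: &mut S) {
    while spi.rx_fifo_level() != 0 {
        let _ = spi.read_dr();
    }
    while spi.busy() {}
}
```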
I'll get back to you as soon as I can actually confirm this.
Going to close this since all these SPI bugs are fixed now, AFAIK. If the issue persists, please ping and I'll reopen.