
I2c peripheral trait

eisterman opened this issue 4 years ago • 13 comments

Although there is an interface for a blocking I2C master, a corresponding slave interface is missing. Are there any ideas or work already done on this?

eisterman avatar Jan 10 '20 15:01 eisterman

Such a trait might be a bit tricky to set up because essentially this would have to be some sort of callback which gets invoked when some master asserts the bus. I guess the callback is even the easiest part, but documenting all the invariants to make sure that a HAL implements the I2C slave driver correctly might be a bit hairy...

therealprof avatar Jan 10 '20 16:01 therealprof

FWIW, I am building a decoder for the SMBus protocol that should be agnostic to a polling, interrupt, or DMA approach. It requires four events: Initiated{direction}, RequestedByte{&mut byte}, ReceivedByte{byte}, and Stopped. It also requires a command handler trait where functions for read_byte etc. are collected. I want to ultimately export a macro for annotating functions with #[read_byte_data(0x42)] which should generate the custom code for the protocol handling, like derive(smbus) or derive(i2c).
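For concreteness, a rough sketch of those four events could look like the following (the names and the Direction enum here are illustrative, not taken from the linked crate):

// Illustrative sketch only; the actual types in the linked crate may differ.
pub enum Direction {
    MasterRead,
    MasterWrite,
}

pub enum SmbusEvent<'a> {
    Initiated { direction: Direction },
    RequestedByte { byte: &'a mut u8 },
    ReceivedByte { byte: u8 },
    Stopped,
}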

https://github.com/barafael/smbus-slave-state-machine

RTIC based WIP implementation: https://github.com/barafael/stm32f0-rtic-smbus-slave (not up to date with my library though).

barafael avatar Dec 18 '20 23:12 barafael

I needed this for a project I'm working on and have something that's working well for me. The interface looks like this:

pub trait I2cPeripheral {
    /// Returns the next event that needs to be handled.
    fn next_event(&mut self) -> I2cEvent;

    /// Write a byte to I2C. Should only be called when `I2cEvent::NeedByte` is
    /// received.
    fn write_byte(&mut self, byte: u8);
}

#[derive(Eq, PartialEq, Debug)]
#[non_exhaustive]
pub enum I2cEvent {
    None,

    /// Emitted when the I2C controller addresses us in read mode.
    StartRead,

    /// Emitted when the I2C controller addresses us in write mode.
    StartWrite,

    /// Emitted when the I2C controller sends a stop signal.
    Stop,

    /// Emitted when the next byte is required. Call `write_byte`.
    NeedByte,

    /// Emitted when we receive a byte from the I2C controller.
    ByteReceived(u8),

    /// Emitted when an error occurs on the I2C bus.
    Error(I2cError),
}

#[derive(Eq, PartialEq, Debug)]
pub enum I2cError {
    Overrun,
    Underrun,
    BusError,
}

I've used this interface for both polling and interrupt-driven operation. For the interrupt-driven case, I initially did it without RTIC, then with RTIC.

The I2C interrupt handler just calls next_event and then matches on the result. I have another layer on top of this that builds up a protocol layer, which also returns events (ReadRegister, WriteRegister, etc.).
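As a usage illustration (not the actual project code), an interrupt handler built on this trait could look roughly like this; the placeholder handling in the match arms stands in for the protocol layer:

fn on_i2c_interrupt(dev: &mut impl I2cPeripheral) {
    match dev.next_event() {
        I2cEvent::None => {}
        I2cEvent::StartRead => { /* controller addressed us in read mode; a NeedByte follows */ }
        I2cEvent::StartWrite => { /* controller addressed us in write mode */ }
        I2cEvent::NeedByte => dev.write_byte(0x00), // placeholder: fetch the next byte from the protocol layer
        I2cEvent::ByteReceived(_byte) => { /* hand the byte to the protocol layer */ }
        I2cEvent::Stop => { /* transaction complete; reset protocol state */ }
        I2cEvent::Error(_e) => { /* clear error flags and reset state */ }
        _ => {} // required because the enum is #[non_exhaustive]
    }
}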

I initially tried to use callbacks, but it got pretty nasty with ownership and lifetimes, since you need multiple callbacks, one for each type of event, and those callbacks all need access to some shared data. Perhaps there's a way to make it nice with callbacks, but I've found an event-driven model to work really well.

If there's interest, I can look to upstream this interface to this crate and at least one implementation (stm32g0xx-hal).

I'm just now reading @barafael's comment above, and the design sounds pretty similar.

davidlattimore avatar Apr 22 '21 06:04 davidlattimore

Looks neat. I think the event-based communication makes the most sense, though my worry is that it might not be zero-cost (or at least as low-cost as it can be). There are a few things between y'all's implementations that I think are worth talking about:

  • I think splitting Initiated{Direction} into StartRead and StartWrite is nice, if only because it reduces the number of imports
  • RequestedByte{&mut u8} seems like the cleaner and more efficient way to do it, though I haven't tried it either way yet
  • (for @barafael) It seems like SMBusState has a hard-coded buffer. Is this something we could const genericize? Or is it like part of the SMBus specification? Also this seems like something that should be done at the driver level rather than the HAL level.
  • I think it might be nice to at least explore what a callback-based system might look like, unless your experimentation leads you to believe that it wouldn't be worth looking into.
  • Are all I2C slave peripherals able to both read and write? The I2C master traits are split into separate Read and Write traits, and I'm not sure if that's something we'd need to do for this as well.

The fact that two people have independently come to the same conclusion is pretty reassuring. Do you have a fork repo of embedded-hal that I could use as a basis for seeing if I can implement this on other chips? Are these lines the only ones you added to the HAL repo?

jaxter184 avatar Apr 22 '21 21:04 jaxter184

There is also this pull request on stm32f0xx-hal: https://github.com/stm32-rs/stm32f0xx-hal/pull/124 which also has a similar design.

barafael avatar Apr 22 '21 21:04 barafael

Regarding the buffer size, we could totally "const genericize" it :) I just wasn't aware of that at the time, and 32 bytes is the maximum block transfer length of SMBus; I2C is not bounded (see the sketch below).

I think with something like the CommandHandler trait it is possible to still use callbacks but to use a static dispatch function by default. As I described above, I really hope it would be possible to implement proc-macros for deriving that trait :)

Regarding Read and Write, in this PDF: https://github.com/barafael/smbus-graph/blob/main/smbus-graph.pdf you can see that the only paths which need read but not write are "Quick Command Read" and "Read Byte".
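A const-generic version of the buffer mentioned above might look like the following sketch (SmbusState and its fields are illustrative, not the crate's actual layout):

// Illustrative const-generic receive buffer; 32 would be the natural default for SMBus.
pub struct SmbusState<const N: usize> {
    buffer: [u8; N],
    len: usize,
}

impl<const N: usize> SmbusState<N> {
    pub const fn new() -> Self {
        Self { buffer: [0; N], len: 0 }
    }

    /// Stores a received byte, returning false if the buffer is already full.
    pub fn push(&mut self, byte: u8) -> bool {
        if self.len < N {
            self.buffer[self.len] = byte;
            self.len += 1;
            true
        } else {
            false
        }
    }
}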

I think the nice thing about @davidlattimore's and my implementations is that they are completely independent of polling/interrupt/DMA and could even work with bitbanged I2C peripherals. We should consider whether our implementations really belong in a HAL. Maybe just the hardware initialization for an I2C peripheral in "slave mode" should be part of the HAL, and any SMBus/I2C slave library can work with that. Just a thought.

barafael avatar Apr 22 '21 22:04 barafael

Regarding optimisation of an event-based API, I think it might be possible for the compiler to make it close to zero cost. If next_event is inlined into the function that matches on the event, then you have some code that produces a few different values followed by some code that branches on those same values. It seems like it should be possible to collapse the set-enum-variant and the check-enum-variant together. Whether the compiler currently does this in practice, I have no idea.

I currently have the code that I pasted above in a separate, internal crate, but the above code is it for the HAL side.

I'm unsure about RequestedByte{&mut u8}, since the store into the hardware tx register needs to be volatile. I have contemplated having NeedByte hold a ByteRequest struct, which in turn would hold a mutable reference to the I2C peripheral and have a method respond(self, byte: u8). Making this work seems tricky, so I guess the question is whether it's worth the extra complexity.
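A rough sketch of that idea, assuming the register is modelled as a plain memory-mapped word (a real HAL would use its PAC types instead):

// Hypothetical sketch; the txdr field stands in for the hardware tx data register.
pub struct ByteRequest<'a> {
    txdr: &'a mut u32,
}

impl<'a> ByteRequest<'a> {
    /// Consumes the request and performs the volatile store into the tx register.
    pub fn respond(self, byte: u8) {
        // SAFETY: in a real implementation this points at the memory-mapped TXDR register.
        unsafe { core::ptr::write_volatile(self.txdr as *mut u32, byte as u32) }
    }
}

// NeedByte would then carry the request, e.g. NeedByte(ByteRequest<'a>),
// which forces the event enum to grow a lifetime parameter.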

I avoided buffering at this layer and put my buffer(s) in the protocol layer, since my protocol layer knows how many bytes it expects to send and receive. It should be possible to have a struct BufferedI2cPeripheral<I: I2cPeripheral> that wraps an I2cPeripheral and adds buffering. With no buffers, my struct that implements I2cPeripheral is a zero-sized type. All the state is in the hardware.
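A sketch of such a wrapper, assuming a fixed 32-byte receive buffer and a very simple policy (the name and fields are hypothetical):

// Hypothetical buffering wrapper around any I2cPeripheral implementation.
pub struct BufferedI2cPeripheral<I: I2cPeripheral> {
    inner: I,
    rx: [u8; 32],
    rx_len: usize,
}

impl<I: I2cPeripheral> BufferedI2cPeripheral<I> {
    pub fn new(inner: I) -> Self {
        Self { inner, rx: [0; 32], rx_len: 0 }
    }

    /// Drives the inner peripheral and returns the received bytes once a stop condition arrives.
    pub fn poll(&mut self) -> Option<&[u8]> {
        match self.inner.next_event() {
            I2cEvent::StartWrite => self.rx_len = 0,
            I2cEvent::ByteReceived(b) if self.rx_len < self.rx.len() => {
                self.rx[self.rx_len] = b;
                self.rx_len += 1;
            }
            I2cEvent::Stop => return Some(&self.rx[..self.rx_len]),
            _ => {}
        }
        None
    }
}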

Something else worth mentioning is that with a couple of extra methods on the trait (start and stop), I think the trait could become a generic, non-blocking interface for I2C regardless of whether operating as a controller or a peripheral.
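Purely as a thought experiment, that combined interface might look something like this (method names and signatures are guesses, not a worked-through proposal):

// Speculative sketch only; not part of any existing trait.
pub trait I2cNonBlocking {
    /// Controller side: issue a (repeated) start condition addressed to `address`.
    fn start(&mut self, address: u8, read: bool);
    /// Issue a stop condition, or release the bus when acting as a peripheral.
    fn stop(&mut self);
    fn next_event(&mut self) -> I2cEvent;
    fn write_byte(&mut self, byte: u8);
}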

davidlattimore avatar Apr 22 '21 22:04 davidlattimore

Maybe just the hardware initialization for an I2C peripheral in "slave mode" should be part of the HAL, and any SMBus/I2C slave library can work with that. Just a thought.

Are you suggesting splitting this trait into a separate crate? What would be the hypothetical benefit of that? Modularity?

I'm unsure about RequestedByte{&mut u8}, since the store into the hardware tx register needs to be volatile.

Ah, I see, I wasn't thinking about volatility at all. I think I like the ByteRequest struct concept (assuming it works roughly as you described), but maybe you're right that it might be overly complex.

Glad to see y'all thinking about and discussing this. I intended (read: vaguely considered) PRing what I've been using, but it's really silly compared to both of your implementations

jaxter184 avatar Apr 22 '21 23:04 jaxter184

Are you suggesting splitting this trait into a separate crate? What would be the hypothetical benefit of that? Modularity?

Not the trait, I think; the "trait" being something like what @davidlattimore is proposing. In my crate, there is no such trait (it is hardware-agnostic). It just helps with en/decoding the messages sent over an SMBus, because I found myself doing it over and over again manually. The benefit would be flexibility, modularity, and keeping the HAL free of specific implementations.

barafael avatar Apr 23 '21 08:04 barafael

What do folks think about how errors should be handled in this kind of trait? In the trait I pasted above, I included an explicit error enum along with the trait definition. The existing blocking::i2c::WriteRead and blocking::i2c::Write traits both leave the error type up to the trait implementation. Allowing implementations to define their own error types allows full propagation of error information, which does seem desirable, especially if running on an operating system which could have all sorts of weird errors. However, I've found building layers on top of the WriteRead and Write traits to be a bit of a pain due to the errors not being concrete types.

I wonder if a happy medium might be to allow implementations to supply their own error type, but to require that that error type implement Into<embedded_hal::I2cError>.

pub trait I2cPeripheral<E: Into<I2cError>> {
    fn next_event(&mut self) -> I2cEvent<E>;
    ...
}

pub enum I2cEvent<E: Into<I2cError>> {
    ...
    Error(E),
}

That way, implementations of the trait can choose whether to just use embedded-hal's I2cError or their own error type, and users of the trait can choose whether to preserve the original error type or convert to embedded-hal's I2cError. Thoughts?
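For illustration, an implementation-specific error type could then plug in via From (HalI2cError and its mapping below are made up):

// Hypothetical HAL-side error type; the mapping is only an example.
pub enum HalI2cError {
    Overrun,
    ArbitrationLost,
}

impl From<HalI2cError> for I2cError {
    fn from(e: HalI2cError) -> Self {
        match e {
            HalI2cError::Overrun => I2cError::Overrun,
            HalI2cError::ArbitrationLost => I2cError::BusError,
        }
    }
}

// Users can keep the concrete HalI2cError or convert it when they don't care:
// let generic: I2cError = hal_error.into();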

davidlattimore avatar Apr 26 '21 22:04 davidlattimore

Something else I've been thinking about is how best to handle I2cEvent::StartRead. At least for the hardware I've been using (stm32g071), things work a lot better if I put a byte into the tx buffer as soon as I get a start event. If I wait until the hardware sets the txis bit (the "I need a byte" bit), then the timing is much more marginal and I'm likely to end up with an underrun. So I'm inclined to say that StartRead should imply the need to send the first byte. One way we could encode this might be like this:

pub enum I2cEvent {
    ...
    StartRead(ByteRequest),
    ...
    NeedByte(ByteRequest),
}
pub trait I2cPeripheral {
    fn write_byte(&mut self, byte: u8, request: ByteRequest);
}

I'm unsure if there's hardware where sending the first byte when you get a start event isn't the right thing to do. If there is, then we could make it StartRead(Option<ByteRequest>). Or perhaps we just say that if there's an implementation that doesn't want the first byte straight away, it can buffer the byte.

If we instead say that StartRead doesn't imply a request for a byte, then the implementation will need to produce two events in response to the start condition - a StartRead and a NeedByte. This would require storing some state in order to know that the NeedByte event needs to be sent. It would also mean that interrupt handlers would need to call next_event in a loop until they got I2cEvent::None. Thoughts?
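To make the trade-off concrete, a handler under that alternative would have to drain events roughly like this (sketch only):

fn drain_i2c_events(dev: &mut impl I2cPeripheral) {
    // One interrupt may now yield several events, so keep polling until None.
    loop {
        match dev.next_event() {
            I2cEvent::None => break,
            I2cEvent::NeedByte => dev.write_byte(0x00), // placeholder byte from the protocol layer
            _other => { /* StartRead/StartWrite/ByteReceived/Stop/Error handled as before */ }
        }
    }
}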

davidlattimore avatar Apr 26 '21 23:04 davidlattimore

I wonder if a happy medium might be allow implementations to supply their own error type, but to require that error type implement Into<embedded_hal::I2cError>.

Makes sense, maybe this is worth creating a separate issue altogether? One thing that I'm not sure about is how it would work with the Never/! type. I think it's fine since it can be converted into any type, but I'm not an expert.

Also should the error go in a Result? i.e. fn next_event(&mut self) -> Result<I2cEvent, E>;? Most of the other fallible trait functions in the HAL return a Result. Not sure if it introduces any performance issues, but it's nice to not have to manually implement stuff like Try/?.

pub trait I2cPeripheral<E: Into<I2cError>> {

Shouldn't the error type be an associated type rather than a generic parameter? Like:

pub trait I2cPeripheral {
    type Error: Into<I2cError>;
...

So I'm inclined to say that StartRead should imply the need to send the first byte.

Is there any mechanical difference between StartRead and NeedByte? In terms of what the driver has to do in both those cases, I feel like it's the same. Maybe StartRead allows the driver crate to perform some sort of initialization? But I feel like that could be done by storing state in the driver crate instead. My main point is, what should the driver be doing if it receives a StartRead event?

Also, if you have to call write_byte anyway, what's the point of ByteRequest?

Or perhaps we just say if there's an implementation that doesn't want the first byte straight away, it can buffer the byte.

This seems like something that could be done in write_byte. In fact, the implementation I've been using so far takes an array of bytes, and just polls until they're all through. Would that be a bad idea in this case? Something like

fn write_bytes(&mut self, bytes: &[u8]) -> Result<(), Self::Error> {
    // Iterate over the slice instead of manual indexing.
    for &byte in bytes {
        // Busy-wait until the transmit data register is empty, then write the next byte.
        while !self.i2c.isr.read().txe().is_empty() {}
        self.i2c.txdr.write(|w| w.txdata().bits(byte));
    }
    Ok(())
}

jaxter184 avatar Apr 28 '21 04:04 jaxter184

Regarding errors, there is a lengthy discussion in #229. Proposition 3 here seems like a good candidate but we got lost in details.

eldruin avatar Apr 28 '21 06:04 eldruin