trouble
Provide example for accepting slave Peripheral's Connection Parameter Update Request when acting as Central
Howdy.
I'm working with a different albeit similar device to the one in #455. It has the same requirement of being bonded to send Notification data, but this one actually works with the existing BLE 4.2 pairing implementation in trouble.
The devices attempt to coax the Central into using their Preferred Connection Parameters, and this one in particular will forcibly disconnect itself from the Central if the connection doesn't change to use those requested Parameters within a period of time after sending the request.
[INFO] [host] disconnection event on handle 0, reason: Remote User Terminated Connection
Ideally I would just set the connection parameters to those Preferred Parameters prior to initial connection, but that only half-works...
Some extra context:
The Peripheral device will start in a low-update-rate mode, using 72 (90ms) for min/max connection interval, 10s for timeout.
It also has a high-update-rate mode, using 11 (13.75ms) for min/max connection interval, 10s for timeout.
The Central can request the Peripheral to use the higher update rate by writing a specific value to a Characteristic; in response, the Peripheral will emit a Connection Parameter Update Request with the high-update-rate mode settings.
When I use the low-update-rate settings for the initial Parameters, the device seems pleased enough and doesn't disconnect after just 30 seconds.
But if I try to use the high-update-rate settings for the initial Parameters, and then tell the device to switch into the high-update-rate mode, the disconnecting behavior returns, though sometimes with a different reason:
[INFO] [host] disconnection event on handle 0, reason: Unacceptable Connection Parameters {async_fn#0} @ host.rs:982
Given that I haven't experienced this behavior with other Central devices, which gracefully update their connection parameters, it smells related to the parameter update handling, but it could be something else about trouble that the Peripheral is upset with that I'm unaware of.
I see there are Connection::update_connection_params and Connection::accept_connection_params, but neither of them has any examples or existing invocations that I can find. To add to the confusion, they both log that the procedure is not supported, and then proceed to do something with the data anyway.
If you are meant to consume the events from the Connection with Connection::next() and feed them into accept_connection_params, there's an extra snag: ConnectionEvent::RequestConnectionParams only has most of the same named fields as the ConnectParams type, so you have to construct a ConnectParams yourself and then pass it into accept_connection_params. But even the Ok(_) happy path then logs that the procedure is not supported, so it doesn't feel right.
When turning on Trace-level logs for trouble, I would only see one of the Requests being emitted in the logs, even though I know it would be sending several as I write to the characteristics.
[DEBUG] [l2cap][conn = ConnHandle(0)] connection param update request: ConnParamUpdateReq { interval_min: 72, interval_max: 72, latency: 0, timeout: 1000 }
Another oddity: when using accept_connection_params in the naive way I described earlier, the error from trying to accept the parameters is not emitted until the Peripheral decides to disconnect.
2025-10-08 21:43:03: INFO Connection parameters request procedure not supported, use l2cap connection parameter update res instead
...
2025-10-08 21:43:28: WARN [host] error updating connection parameters for ConnHandle(0): Remote User Terminated Connection
2025-10-08 21:43:28: INFO [host] disconnection event on handle 0, reason: Remote User Terminated Connection
2025-10-08 21:43:38: WARN [link][pool_request_to_send] connection ConnHandle(0) not found
2025-10-08 21:43:38: WARN [host] unable to send data to disconnected host (ignored)
I also can't tell if handling these Requests is a responsibility intentionally left to me to handle as I wish, or if it is something that should be handled implicitly by the runner and something is going awry.
Another small concern I had was with the use of embassy_time's Duration type for ConnectParams's intervals and timeout, primarily due to its use of Ticks instead of discrete nano/microseconds. When debug-printing them with the default 1 MHz tick rate, the values are easy enough to decode by mentally moving the decimal point, and precision doesn't seem to be an issue.
But I wonder how both printing and precision would interact with non-default tick rates that aren't powers of 10, especially with all the division required to go from ms -> ticks -> ms.
It also makes entering values like the connection interval multiple of 11 interesting: I have to first convert it into microseconds, since I don't have an easier way to enter 13.75 ms, and the tick-precision concerns creep into my mind again. 😅
Maybe they're unfounded, but I'd like to be brought to that conclusion if possible.
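For what it's worth, the round-trip error is easy to estimate with plain integer math. The sketch below mimics a microseconds -> ticks -> microseconds conversion at a hypothetical 32768 Hz tick rate (whether embassy-time truncates in exactly this way is an assumption; check the embassy-time source for your version):

```rust
// Hypothetical non-power-of-10 tick rate, e.g. a 32.768 kHz RTC.
const TICK_HZ: u64 = 32_768;

// Integer truncation in both directions, as a worst-case model.
fn micros_to_ticks(us: u64) -> u64 {
    us * TICK_HZ / 1_000_000
}

fn ticks_to_micros(ticks: u64) -> u64 {
    ticks * 1_000_000 / TICK_HZ
}

fn main() {
    let us = 13_750; // 13.75 ms, the 11-unit connection interval
    let ticks = micros_to_ticks(us);
    let back = ticks_to_micros(ticks);
    // 13750 us -> 450 ticks -> 13732 us: about 17 us lost to truncation
    println!("{us} us -> {ticks} ticks -> {back} us");
}
```

So at a 32.768 kHz tick rate the error stays in the tens of microseconds, well below the 1.25 ms granularity the controller uses anyway.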
The connection parameter update support is quite new, so it can be a bit rough around the edges. This is the gist of how we do it in our firmware:
central:

```rust
loop {
    match conn.next().await {
        ConnectionEvent::RequestConnectionParams { .. } => {
            // Check whether it's ok to accept these params, etc.
            conn.accept_connection_params(...);
        }
        _ => {}
    }
}
```
peripheral:

```rust
conn.update_connection_params(...);
```
As you point out, the params you get in the event do not cover all the params you need to pass to the update/accept methods. That's because they are simply not provided by the BLE HCI specification, which is annoying. To deal with that, we hardcode the min/max event lengths to cover both connection param 'variants'.
Regarding the use of the embassy-time Duration: I think you have a point that in some cases you'd not get the desired accuracy. This is easy to check against the HCI spec, which uses units of 0.625 ms and 1.25 ms, so the params passed to the controller will be rounded to those values.
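The raw values in this thread line up with those units exactly, which is a handy sanity check. Per the HCI spec, connection intervals go on the wire in 1.25 ms units and the supervision timeout in 10 ms units; the helper functions below are just illustrative:

```rust
// Connection interval: 1.25 ms (1250 us) per unit on the wire.
fn interval_units(us: u64) -> u64 {
    us / 1_250
}

// Supervision timeout: 10 ms (10000 us) per unit on the wire.
fn timeout_units(us: u64) -> u64 {
    us / 10_000
}

fn main() {
    assert_eq!(interval_units(90_000), 72);      // 90 ms low-rate mode
    assert_eq!(interval_units(13_750), 11);      // 13.75 ms high-rate mode
    assert_eq!(timeout_units(10_000_000), 1000); // 10 s timeout
}
```

These match the 72 / 11 values from the two modes and the `timeout: 1000` in the ConnParamUpdateReq log line, so a Duration only survives the trip unchanged if it is a multiple of the relevant unit.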
I see. How should I calculate those min/max event duration values? The available literature I've seen doesn't mention Events much at all, and the source doesn't explain much; I could very easily be missing something in either case. Bluetooth (and its many versions) as a specification isn't something I'm entirely familiar with yet.
I was trying a similar flow in my earlier attempts without much success, and with trouble-host 0.5.0 and esp-hal 1.0.0-rc.1 I seemingly get a hard panic when attempting the same method.
Script plus backtrace:
https://gist.github.com/nullstalgia/ae1ce78b7a2f8849d117f34472ae6ac6
> I see. How should I calculate those min/max event duration values?
The event length is the 'window' during which the radio may send or receive a packet. What value you should use depends on which PHY you use (1M, 2M, Coded, etc.), how many connections you want to allow, and the amount of data you typically transfer in a packet.
The event window starts at each connection interval, so the window can never be longer than the connection interval (otherwise it would overlap the next one). The longer the window, the more data you can transfer, but at the cost of other connections not being able to transfer at the same time, and at the cost of power usage.
So if I have one Central and one Peripheral, should I be able to just use the known-ahead-of-time connection intervals of 90 ms and 13.75 ms for my min/max duration values?
For the full context, I have a single ESP32-S3 connecting to my peripheral, and looking at the logs they agree on an MTU of 200.
On another note, is there something I'm doing wrong in my script to cause the panic?