mpu6050
Feature: error correction offsets.
Adds offsets for error correction, and an example that can automatically generate offsets.
The example works as described by the following pseudo-code:

Iterate x times:
1. Retrieve accelerometer and gyro data (offsets are already added).
2. Calculate the difference between the read values and the target values.
3. Change the offsets by the difference multiplied by a fraction.
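For concreteness, here is a minimal Rust sketch of that loop for a single axis; the names `ITERATIONS`, `SPEED`, `read_raw` and `target` are illustrative, not the example's actual identifiers:

```rust
// Minimal sketch of the offset search for one axis. `read_raw` stands in for
// whatever returns a fresh raw reading; the offset is added to it, mirroring
// "offsets are already added" in the pseudo-code above.
const ITERATIONS: usize = 1_000; // "x times"
const SPEED: f32 = 0.1;          // the "fraction" applied per iteration

fn calibrate_axis(mut read_raw: impl FnMut() -> f32, target: f32) -> f32 {
    let mut offset = 0.0_f32;
    for _ in 0..ITERATIONS {
        let reading = read_raw() + offset; // reading with the current offset applied
        let delta = target - reading;      // difference between read value and target value
        offset += delta * SPEED;           // change the offset by a fraction of the difference
    }
    offset
}
```

For a sensor at rest the target would typically be 0 for the gyro axes and roughly 1 g for the accelerometer Z axis.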
@juliangaal Please, can you still look at this PR?
I can and will, but am short on time atm
I understand. Thank you in advance for your time.
Thank you for your effort. In the meantime (maybe you didn't know) you can use a link to a GitHub repository (your fork in this case) in your `Cargo.toml` directly, in case you need the feature ASAP.
just as an idea: why not put the calibration logic into the lib, either as a dedicated `pub fn calibrate` or directly as part of `init` (i'd still put it as a separate fn, just not a `pub` one, and call it from `init`), and accordingly keep the calibration data private and instead use it when returning the values, to return calibrated data?
That might be a good option for bigger projects. But it does pose a problem: as it would be time-consuming (and unnecessary) to recalibrate the sensor at the start of every runtime, we would also need a mechanism to store and load the calibration data to and from a file. I intended this commit to be a quick solution for prototyping (just like this whole repo, as it seems), not code to be deployed by an enterprise.
maybe offer two APIs?

- `calibrate() -> Result<CalibrationData, ...>`
- `load_calibration_data(data: &CalibrationData) -> Result<(), ...>`
then the consumer can decide to:
- not do any calibration
- do a calibration every time
- do a calibration only if no calibration data has been stored (by the consumer!) and otherwise use the previous data
I'll have a serious look after my thesis is completed, great suggestion @rursprung
@juliangaal I have time at the end of next week to write the basis for this feature and incorporate it into this PR.
perfect. As soon as I can test on my hardware, I'll merge. Please take your time, my thesis deadline is Dec 02.
> * do a calibration only if no calibration data has been stored (by the consumer!) and otherwise use the previous data

What exactly do you mean by "the previous data"? I've searched the datasheet and register map for any references to calibration registers or something similar, but found nothing.
Work has begun in branch `dev`. Please also change the base of this request to `dev` for further changes @HeikoRibberink
> * do a calibration only if no calibration data has been stored (by the consumer!) and otherwise use the previous data
>
> What exactly do you mean by "the previous data"? I've searched the datasheet and register map for any references to calibration registers or something similar, but found nothing.
sorry if i didn't make this clear ☹️. this is where the "by the consumer" part of the point comes in - it's the responsibility of the consumer to somehow store the calibration data (in some permanent storage). i'm not aware of any calibration registers on the mpu6050 either.
ah ok, that makes sense. Regarding your suggested `calibrate` function: does it calibrate and return the resulting offsets, or only return offsets to then be loaded manually? I guess "the previous data" is produced from `calibrate`?
my suggestion before was this API (i've just added in `&mut self` now in the quote because i realised that i forgot it before):

> - `calibrate(&mut self) -> Result<CalibrationData, ...>`
> - `load_calibration_data(&mut self, data: &CalibrationData) -> Result<(), ...>`
`calibrate` would compute the calibration offsets, store them internally in the `Mpu6050` struct and return them so that the consumer can store them (if wished - otherwise the consumer can just ignore the returned calibration data).

`load_calibration_data` would overwrite its internal calibration data (in `Mpu6050`) with the data being passed in.
so a consumer could do something like this (pseudocode):

```rust
let i2c = ...;
let mut mpu = Mpu6050::new(i2c);
mpu.init(&mut delay).unwrap();

if let Some(calibration_data) = load_calibration_data() { // some function of the consumer to get the data from somewhere
    mpu.load_calibration_data(&calibration_data)?;
} else {
    let calibration_data = mpu.calibrate()?;
    store_calibration_data(&calibration_data); // some function of the consumer to persist the data for future usage
}
```
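As a reference point for this discussion, here is a rough sketch of how that could look on the library side; `CalibrationData`'s fields and the `CalibrationError` type are placeholders, not definitions from the crate:

```rust
// Possible library-side shape. `CalibrationData`, the field names and
// `CalibrationError` are placeholders, not the crate's actual definitions.
#[derive(Clone, Debug, Default)]
pub struct CalibrationData {
    pub acc_offsets: [f32; 3],
    pub gyro_offsets: [f32; 3],
}

#[derive(Debug)]
pub struct CalibrationError;

pub struct Mpu6050<I2C> {
    i2c: I2C,
    calibration: CalibrationData,
}

impl<I2C> Mpu6050<I2C> {
    /// Run the iterative offset search, store the result internally and
    /// return a copy so the consumer can persist it if desired.
    pub fn calibrate(&mut self) -> Result<CalibrationData, CalibrationError> {
        // ... iterate: read sensors, compute deltas, nudge `self.calibration` ...
        Ok(self.calibration.clone())
    }

    /// Replace the internal calibration data with previously stored values.
    pub fn load_calibration_data(
        &mut self,
        data: &CalibrationData,
    ) -> Result<(), CalibrationError> {
        self.calibration = data.clone();
        Ok(())
    }
}
```

Persisting the returned data stays the consumer's job either way, matching the pseudocode above.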
@HeikoRibberink a few questions regarding your calibration code:

- Isn't speed always 1/`DELAY`?
- Shouldn't delta be squared?
- `TEST_ITERATIONS` and `ITERATIONS` must be the same for the calculated errors to make sense, correct?
Thanks for your help and excuse my long, long delay
> Isn't speed always 1/`DELAY`?

No, the fact that the default `SPEED` I chose, `0.1`, is equal to `1/DELAY` (`1/10`) is just a coincidence. `SPEED` is probably a misnomer: it determines how much the offsets are updated per iteration, and thus how much noise affects the offsets: too small, and the offset will never reach the equilibrium; too large, and the offset will be affected too much by noise. Finding this balance is up to the user. I don't know what a better name would be: open for suggestions.
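To make the trade-off concrete: with an update of the form `offset += delta * SPEED` and a noise-free, constant raw reading, every iteration removes a fixed fraction `SPEED` of the remaining offset error, while measurement noise enters each step scaled by `SPEED`. A tiny illustrative helper, not taken from the example code:

```rust
// Illustrative only: with the update `offset += delta * SPEED` and a
// noise-free, constant raw reading, the offset error shrinks geometrically.
fn remaining_error(initial_error: f32, speed: f32, n: u32) -> f32 {
    initial_error * (1.0 - speed).powi(n as i32)
}
// e.g. remaining_error(1.0, 0.1, 100) ≈ 2.7e-5: at SPEED = 0.1, a hundred
// iterations shrink the initial error by almost five orders of magnitude.
```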
> Shouldn't delta be squared?

Delta is the difference between the expected reading and the actual reading, and the algorithm tries to minimize this difference. In the part where the offsets are calculated, it doesn't really make sense to square it, as you can minimize the difference linearly, and squaring might lead to overshooting.

In the part where the error is calculated, I chose to make the error dependent on delta squared, as that is common practice as far as I know. `delta.abs()` would work as an alternative.
> `TEST_ITERATIONS` and `ITERATIONS` must be the same for the calculated errors to make sense, correct?

The errors are averaged by multiplying with `inverse_i`, which is the same as dividing by `TEST_ITERATIONS as f32`. Therefore, the value of `TEST_ITERATIONS` only changes the precision of the final error.

(Side note: it is probably better to divide by `TEST_ITERATIONS as f32` than to multiply by `inverse_i`, as the former is much clearer. I originally chose the latter as it is, in theory, slightly more performant. I realise now that that doesn't outweigh the loss of clarity.)
> equilibrium

Couldn't reaching (or failing to reach) the equilibrium be detected at runtime and returned to the user? Or could we detect an error that is too large and abort?
Good idea; it would take a lot of hassle away from the user of this library. However, how would we implement this without risking an infinite loop, while keeping a high accuracy?
I guess convergence can be measured in two ways:

1. change in offset since the last adjustment below a threshold
2. error below a certain threshold

The latter is more indirect, but combining both metrics is a reasonable approximation, I would think.

An infinite loop would simply be prevented with a maximum number of allowed loops. If the maximum is reached without 1. and 2. being met, the calibration has "officially" failed and the parameters will need to be adjusted.
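A rough sketch of that termination logic; all names and threshold values below are made up for illustration, not taken from the PR:

```rust
// Sketch of the proposed stop conditions: succeed once both the latest offset
// change and the measured error are below their thresholds, and give up after
// MAX_ITERATIONS so the loop can never run forever.
const MAX_ITERATIONS: usize = 10_000;
const OFFSET_CHANGE_THRESHOLD: f32 = 1e-4;
const ERROR_THRESHOLD: f32 = 1e-3;

enum CalibrationOutcome {
    Converged { iterations: usize },
    Failed { last_error: f32 },
}

fn calibrate_with_convergence(
    mut read_raw: impl FnMut() -> f32,
    target: f32,
    speed: f32,
) -> CalibrationOutcome {
    let mut offset = 0.0_f32;
    let mut error = f32::INFINITY;
    for i in 0..MAX_ITERATIONS {
        let delta = target - (read_raw() + offset);
        let change = delta * speed;
        offset += change;
        error = delta * delta; // instantaneous squared error; an average would be smoother
        if change.abs() < OFFSET_CHANGE_THRESHOLD && error < ERROR_THRESHOLD {
            return CalibrationOutcome::Converged { iterations: i + 1 };
        }
    }
    CalibrationOutcome::Failed { last_error: error }
}
```

Returning a dedicated failure value (or an `Err`) would let the caller decide whether to retry with adjusted parameters.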