
adxl345: Added automatic accelerometer calibration

Open dmbutyugin opened this issue 3 years ago • 57 comments

This adds ACCELEROMETER_CALIBRATE command and a lightweight calibration test, which runs some sanity checks, detects the accelerometer orientation, required axes scaling, etc. The results of the calibrations can be persisted in printer.cfg instead of deducing axes_map.

Signed-off-by: Dmitry Butyugin [email protected]

dmbutyugin avatar Nov 16 '21 20:11 dmbutyugin

I implemented this autocalibration process following some discussions in #4560. While the resonance test changes may not get implemented at all or get implemented some other way, I see, in principle, a value of this calibration process as a stand-alone thing. It can be recommended during initial accelerometer setup, because it runs some checks and may help users identify issues with their setup. Also, while axes_map is not required, it may be more straightforward to the users to run this test instead of figuring out the correct orientation themselves. Besides, this test can detect an arbitrary adxl345 orientation and generate a correct axes_transform for it.

How I tested that it works correctly: I simply misaligned the accelerometer on purpose by unscrewing one of the screws, rotating it, and fixing it just with the other screw (photo 20211116_212830 attached). After that, both the adxl345 orientation and the inverse axes mapping were detected, apparently correctly (but I naturally couldn't precisely measure the accelerometer angle independently). In terms of repeatability, running the calibration process a few times gave the following results:

Detected gravity direction: -0.007101, -0.010069, 1.005166
Detected gravity direction: -0.007039, -0.010072, 1.004979
Detected gravity direction: -0.007320, -0.010433, 1.005186
Detected gravity direction: -0.007744, -0.010030, 1.005203

Detected x direction: 0.742505, -0.617116, -0.010375
Detected x direction: 0.741635, -0.609385, -0.012458
Detected x direction: 0.741688, -0.607511, -0.009683
Detected x direction: 0.744411, -0.615639, -0.012982

So, the results happened to be within +/- 0.01.

It would be great to get feedback from other folks potentially interested in this feature.

dmbutyugin avatar Nov 16 '21 20:11 dmbutyugin

This was also a good opportunity for me to check how the accelerometer works when it is misaligned (and compare it with the case when it is aligned); see the aligned_vs_unaligned plot. Here 'aligned' is when the accelerometer is mounted correctly, 'unaligned' - rotated, 'uncalibrated' - running the resonance test as-is without prior accelerometer calibration, and 'calibrated' - after running ACCELEROMETER_CALIBRATE.

It seems that the PSD (power spectral density) calculation - summing up the energy between axes - is physically more or less sound, and the sum PSD matches very well between different configurations. Small differences between different versions are likely the results of some scaling applied + perhaps some small changes in the response due to different mounting in aligned vs unaligned state. But it is really interesting that while the sum (X+Y+Z) is pretty much the same, the inputs to these sums can be quite different:

(Plots: aligned, uncalibrated vs. unaligned, uncalibrated)
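To see why the summed PSD is insensitive to the accelerometer orientation, here is a small illustrative numpy sketch (made-up signal and rotation, not the PR code): rotating the axes redistributes energy between X, Y and Z, but the per-frequency sum is unchanged because rotations preserve vector norms.

```python
import numpy as np

rng = np.random.default_rng(0)
accel = rng.standard_normal((3, 4096))          # made-up 3-axis acceleration data

def psd_sum(data):
    # Per-axis power spectrum via FFT, then summed over the three axes
    return (np.abs(np.fft.rfft(data, axis=1)) ** 2).sum(axis=0)

# An arbitrary rotation about Z (roughly what a rotated accelerometer mounting does)
theta = np.radians(33.0)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])

# The individual axes change, but their summed PSD is preserved by the rotation
assert np.allclose(psd_sum(accel), psd_sum(rot @ accel))
```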

dmbutyugin avatar Nov 16 '21 21:11 dmbutyugin

BTW, @KevinOConnor, this PR contains a commit e822a5a3a6464e2c762be32d2f2aafe4ebe193cb which fixes a race condition in the accelerometer start/stop code that I ran into. A problematic sequence of code execution may occur when the start of the next measurement happens 'too soon' after the previous measurement completes:

  • the main thread calls ADXL345QueryHelper.finish_measurements();
  • finish_measurements waits for moves to be completed and calls cconn.finalize(); this only marks the channel as closed but does not stop the measurements;
  • the main thread continues execution and data analysis;
  • event thread is activated (e.g. it acquires GIL or the main thread gives way) and tries to dispatch the next portion of messages, finds InternalDumpClient connection closed, calls APIDumpHelper._stop();
  • APIDumpHelper._stop() ultimately calls ADXL345._finish_measurements();
  • _finish_measurements() finds measurements in progress, attempts to send query_adxl345_end_cmd;
  • the main thread may resume at this point and try to start the next batch of measurements and call ADXL345.start_internal_client();
  • after a few calls within APIDumpHelper, ADXL345._start_measurements() is called and finds that the measurements are in-progress, thus it skips ADXL345 reading initialization;
  • event thread is eventually reactivated and completes measurements shutdown; no ADXL345 data is acquired after this point.
  • the second measurement attempt, after calling ADXL345QueryHelper._finish_measurements(), finds no data.

I fixed the race condition as per that commit. However, I'm not sure I did everything right for Klipper's organization of threads (and I noticed that calling start_stop_lock.acquire(True) instead, e.g. via with start_stop_lock, leads to deadlocks), so I'd ask to pay extra attention to this commit during the review. Maybe there is an even easier way to fix that race condition. But at least it seems that another way - reordering the statements

        params = self.query_adxl345_end_cmd.send([self.oid, 0, 0])
        self.query_rate = 0

- may be fragile and only make race conditions less likely to occur.
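For readers less familiar with this kind of bug, the sketch below shows the generic pattern for avoiding such a start/stop race: a single lock-protected "running" flag makes a late stop from the event thread and a new start from the main thread strictly ordered. This is an illustrative sketch with hypothetical names only, not the code from commit e822a5a (which has to fit Klipper's reactor/thread model, where a plain blocking lock was reported above to deadlock).

```python
import threading

class MeasurementGuard:
    """Illustrative only: serialize accelerometer start/stop transitions."""
    def __init__(self, start_cmd, stop_cmd):
        self._lock = threading.Lock()   # protects the _running flag
        self._running = False
        self._start_cmd = start_cmd     # callable that starts sampling
        self._stop_cmd = stop_cmd       # callable that stops sampling

    def start(self):
        with self._lock:
            if self._running:
                raise RuntimeError("measurements already in progress")
            self._running = True
            self._start_cmd()

    def stop(self):
        with self._lock:
            if not self._running:
                return                  # stale stop request; nothing to do
            self._running = False
            self._stop_cmd()
```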

dmbutyugin avatar Nov 16 '21 21:11 dmbutyugin

Interesting, thanks. Definitely looks useful.

I'll need to take a closer look at this and run some tests with it. I have some random comments in the meantime:

  1. I'll take a closer look at e822a5a and the bug.
  2. I noticed this feature would require numpy to be installed.
  3. If I understand this code correctly, it assumes the printer's Z axis is always aligned to gravity. I'm not sure that's a good assumption. That is, if the user had their printer at a slight angle then Z axis moves would not be aligned with gravity and the test wouldn't be accurate? As an alternative, could the code perform Z move tests the same way it performs XY move tests?
  4. It looks like the code works by moving the toolhead from point p1 to p2, calculates the average acceleration between the midway point (p1 to p_mid_12 and p_mid_12 to p2), and compares that to the expected acceleration during that time range. Is that correct?
  5. Shouldn't the code subtract out the freefall acceleration prior to calculating the average acceleration? Otherwise, wouldn't gravity skew the magnitude sanity checks? I feel I must be missing something here.
  6. If I understand the approach correctly, the results might be skewed due to the hard cutoff of time selection for p1, p_mid_12, and p2. If there is resonance (either from belt resonance or due to stepper detent forces resulting from choice of p1 and p2) then that resonance might introduce a bias?
  7. I wonder if a slightly different approach might be simpler. A move test could pause the toolhead for 200ms, move from p1 to p2, and then pause for 200ms. One can then subtract freefall data from the resulting accelerometer data and do "dead reckoning" on it to determine how far the accelerometer measured the toolhead movement. (That is, do "dead reckoning" by integrating the data to velocity and then integrating again to a position.) The scaling factors can then be calculated by comparing the measured distance travelled to the actual distance travelled. The gain, I'd guess, is that requested move acceleration isn't particularly important to the calculation, and resonances shouldn't adversely impact the measurement as the only time cut-offs are after the toolhead has been given ample time to settle. If I get a chance I'll try to play with some sample code for this.

Thanks again, -Kevin

KevinOConnor avatar Nov 19 '21 05:11 KevinOConnor

Just for the fun of it:

Calibrating z axis
Detected gravity direction: -0.085790, 0.041609, 1.028055
SAVE_CONFIG command will update adxl345 configuration with gravity = -841.3,408.0,10081.8 parameter
Calibrating x axis
Detected x direction: 1.037004, -0.004343, 0.016995
Calibrating y axis
Detected y direction: 0.016625, 0.944769, -0.025575
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
0.962927977,-0.014753047,0.080952239
0.005122451,1.057223008,-0.042362380
-0.015791161,0.026544849,0.970318469

Link to the raw files and PNGs of 3 runs (in timestamp order):

  1. Current Git
  2. Current Git
  3. This PR

https://drive.google.com/file/d/1XPp_qVr9-ZljfEn2fqsM4yjGyCYfyOEx/view?usp=sharing

Sineos avatar Nov 19 '21 16:11 Sineos

Thanks for taking a look, Kevin!

1. I'll take a closer look at [e822a5a](https://github.com/Klipper3d/klipper/commit/e822a5a3a6464e2c762be32d2f2aafe4ebe193cb) and the bug.

OK, thanks!

2. I noticed this feature would require numpy to be installed.

Yes. The installation guide first says to install numpy, and then proceed with the measurements, so it should be OK. Specifically, numpy is used for

  1. Fast integration (well, averaging). This makes data processing very fast and allows doing it in the main Klipper thread without the risk of blocking it for too long (I think).
  2. Computing the matrix inverse (implementation from LAPACK, probably more numerically robust than a simple implementation one would write) and vector cross product (this is really for convenience).

In general, all of that could be implemented in Python. But if we are requiring numpy installation for resonance testing anyway, why not take advantage of it?
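To make the linear algebra concrete, here is a rough numpy sketch using the direction vectors Sineos posted above. Whether this matches the exact sign/ordering conventions of the PR's axes_transform is not guaranteed, but the inverse comes out close to the axes_transform shown in that output.

```python
import numpy as np

# Rows: accelerometer reading for a unit acceleration along printer X, Y, Z
# (values taken from the calibration output posted above)
detected = np.array([
    [1.037004, -0.004343,  0.016995],   # "Detected x direction"
    [0.016625,  0.944769, -0.025575],   # "Detected y direction"
    [-0.085790, 0.041609,  1.028055],   # "Detected gravity direction" (~printer Z)
])

# If a true printer-frame acceleration a produces the sensor reading
# s = detected.T @ a, then mapping readings back to the printer frame is:
axes_transform = np.linalg.inv(detected.T)

printer_accel = axes_transform @ np.array([0.0, 0.0, 1.0])  # example sensor reading
```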

Separately, after the full migration to Python 3, I'm thinking we may make numpy installation a default - in this case, pip should be able to fetch a pre-built package for all popular platforms, so it won't take long to install (and probably it does not take that much space in the compiled form).

3. If I understand this code correctly, it assumes the printer's Z axis is always aligned to gravity.  I'm not sure that's a good assumption.  That is, if the user had their printer at a slight angle then Z axis moves would not be aligned with gravity and the test wouldn't be accurate?  As an alternative, could the code perform Z move tests the same way it performs XY move tests?

It is not possible to make a Z move in all instances - e.g. on a bed slinger, a mounted accelerometer can only move in one direction (Y axis, typically). So, the other axes must be inferred in some other way (but then, realistically, the orientation of the other axes is not that important anyway). Then, when Z movement is possible, on many kinematics the acceleration is additionally limited in the config to a very small value (and it is not override-able today). And testing with very small acceleration (e.g. 100 mm/sec^2) will increase the noise and reduce the precision accordingly.

But an alternative could be this: I already figure the 3rd axis when only 1 of X and Y is testable. This is done via cross-product of the two other axes. When the accelerometer is mounted to the toolhead that can move in X and Y direction, we can use them as reference and make Z orthogonal to them by computing cross-product too. This will ensure orthogonality between Z and other axes and prevent energy 'leakage' from the vibrations of axis X or Y onto Z. And even when only 1 axis is 'testable', it might still be worthwhile to make Z axis and that other axis orthogonal (e.g. with X axis and gravity vector G, we can do Y = G x X, Z = X x Y). The only question is about which scale to use on Z axis then.
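A minimal numpy sketch of this cross-product idea (hypothetical helper, not the PR code), using the gravity and X directions from my runs above:

```python
import numpy as np

def orthonormal_frame(x_measured, gravity):
    """Hypothetical helper: build a right-handed orthonormal frame from a
    measured X direction and the (roughly vertical) gravity vector."""
    x = x_measured / np.linalg.norm(x_measured)
    y = np.cross(gravity, x)          # Y = G x X, orthogonal to X by construction
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                # Z = X x Y, already unit length
    return np.vstack([x, y, z])       # rows are the new X, Y, Z unit vectors

# Using the gravity and X directions measured earlier in this thread
frame = orthonormal_frame(np.array([0.742505, -0.617116, -0.010375]),
                          np.array([-0.007101, -0.010069, 1.005166]))
```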

Just to note: I'm not brushing away your idea here, but simply trying to explain my reasoning for not choosing Z motion in the test. My idea was to create a light-weight test that can be run on printers of any kinematics and, hopefully, without the need to adjust printer.cfg config prior to running this test. So that it can be executed as a first step in the resonance testing, for instance. But in general, determining the orientation of the other axes that cannot be tested directly is the biggest open question in this whole thing and I'm open to suggestions on how to improve it or do it some other way.

4. It looks like the code works by moving the toolhead from point `p1` to `p2`, calculates the average acceleration between the midway point (`p1` to `p_mid_12` and `p_mid_12` to `p2`), and compares that to the expected acceleration during that time range.  Is that correct?
6. If I understand the approach correctly, the results might be skewed due to the hard cutoff of time selection for `p1`, `p_mid_12`, and `p2`.  If there is resonance (either from belt resonance or due to stepper detent forces resulting from choice of `p1` and `p2`) then that resonance might introduce a bias?

Your understanding of the test is correct. To be more precise, the velocity and acceleration look like this:

(velocity and acceleration profile plot)

So, it basically goes like this from p1 to p2 and back: acceleration, cruising, deceleration, acceleration, cruising, deceleration, ... Note that 'deceleration, acceleration' have the same sign when projected onto the axis of the motion. This means that integration from p_mid_12 to p2 and from p2 to p_mid_12 goes with the same sign, and the splitting point p2 is really arbitrary (in fact, it is done purely to simplify the integration loop). So the integration goes more like [p_mid_12 -> p_mid_12], over the p2 part with a minus sign and over the p1 part with a plus sign. p_mid_12 is chosen when there is no toolhead acceleration, with a large safety margin (only the vibration from steppers, fans, resonances, etc.), so we don't need to get precisely to the mid point between p1 and p2 - as long as we do the integration by full half-periods, we get exactly a/2 for the toolhead acceleration in each integration half-period after averaging over it. And because of that, and with different signs on each half-period, my hope is that any random noise will cancel out. And given the test length L = 4mm = 2 * GT2 step, I hope that most of the biases can also be avoided.
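A condensed sketch of this integration scheme (hypothetical helper, not the actual PR code): average the acceleration over the window between the two passes through the mid-point and compare the resulting velocity change with the commanded one (+v_cruise on the way out, -v_cruise on the way back).

```python
import numpy as np

def accel_scale(times, accel_along_axis, v_cruise, t_mid_out, t_mid_back):
    """Hypothetical helper: measured/commanded scale factor for one motion axis.

    times            -- sample timestamps (s)
    accel_along_axis -- acceleration projected onto the motion axis (mm/s^2)
    v_cruise         -- commanded cruising velocity (mm/s)
    t_mid_out        -- time of passing p_mid_12 on the way p1 -> p2
    t_mid_back       -- time of passing p_mid_12 on the way p2 -> p1
    """
    window = (times >= t_mid_out) & (times <= t_mid_back)
    t, a = times[window], accel_along_axis[window]
    # Trapezoidal integration of acceleration gives the measured velocity change
    dv_measured = np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(t))
    dv_commanded = -2.0 * v_cruise    # +v_cruise on the way out, -v_cruise back
    return dv_measured / dv_commanded
```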

FWIW, I ran this ACCELEROMETER_CALIBRATE test in several points across the buildplate (I tried to choose them such that they are not some multiple of GT2 belt or stepper microstep away), and did not notice any differences between the measurements outside of their usual repeatability error margin.

5. Shouldn't the code subtract out the freefall acceleration prior to calculating the average acceleration?  Otherwise, wouldn't gravity skew the magnitude sanity checks?  I feel I must be missing something here.

It actually does. After figuring out the gravity vector, _save_gravity method applies gravity transform to the chip via set_transform method. And then it gets subtracted from all subsequent measurements.
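Purely as an illustration of that effect (hypothetical values, not the PR's set_transform code): once the gravity offset is known in raw sensor units, every subsequent sample has it subtracted, so only motion-induced acceleration remains.

```python
import numpy as np

# Hypothetical numbers: the gravity offset in raw sensor units
# (cf. the SAVE_CONFIG gravity value shown earlier in the thread)
gravity_offset = np.array([-841.3, 408.0, 10081.8])

def remove_gravity(raw_samples):
    """Subtract the static gravity reading from each raw accelerometer sample,
    leaving only the motion-induced acceleration."""
    return np.asarray(raw_samples, dtype=float) - gravity_offset
```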

7. I wonder if a slightly different approach might be simpler.  A move test could pause the toolhead for 200ms, move from `p1` to `p2`, and then pause for 200ms.  One can then subtract freefall data from the resulting accelerometer data and do "dead reckoning" on it to determine how far the accelerometer measured the toolhead movement.  (That is, do "dead reckoning" by integrating the data to velocity and then integrating again to a position.)  The scaling factors can then be calculated by comparing the measured distance travelled to the actual distance travelled.  The gain, I'd guess, is that requested move acceleration isn't particularly important to the calculation, and resonances shouldn't adversely impact the measurement as the only time cut-offs are after the toolhead has been given ample time to settle.  If I get a chance I'll try to play with some sample code for this.

Note that I thought about adding pauses between moves in my test. However, I ran into the problem that (I think) toolhead.dwell(x) method waits for all kinematic activity to stop before 'dwelling'. Which sort of makes sense. But input shaping adds 'tails' to the kinematic activity of the moves. And so this unpredictably changes the duration of the test when input shaping is enabled. I mean, it could be figured out - at the end of the day, toolhead computes that - but since the pause is actually unneeded due to how integration works, I decided to simply remove those pauses.

Separately, I am slightly worried that doing 2 passes of integration (a->v and v->x) will be more prone to numerical integration errors. I might be wrong here, but naively thinking, one integration (averaging) pass should generally be more stable than two. Plus, I'd expect that it would be necessary to run the test for a bit longer (e.g. to run this move back and forth a few times). But if you want to take a look at this - of course, I'd be happy if this approach proves more robust and we can replace that part of the test.

dmbutyugin avatar Nov 19 '21 18:11 dmbutyugin

@Sineos, thanks!

So, from the results of your test I can conclude that your accelerometer was already correctly aligned with the printer axes, and (according to the accelerometer) the X and Y axes are almost perfectly orthogonal - 89.3 degrees :) And of course, the slight non-orthogonality could be a result of measurement inaccuracy.

Link to the raw files and PNGs of 3 runs (in timestamp order):

1. Current Git

2. Current Git

3. This PR

So, I take it that the results are more or less unchanged after the calibration? There are some small differences in amplitude, but they may generally be a result of small non-repeatability of the test.

BTW, if anyone is willing to help testing this feature, it would be really helpful if you could run the calibration multiple times - at the same point, and maybe also at a few points (not far away, just off by some value, e.g. +/- pi mm from the original point) - and report the measurement results for each attempt. This will help us understand the level of noise of the test, how repeatable it is, and whether it has any location biases.

dmbutyugin avatar Nov 19 '21 18:11 dmbutyugin

after the full migration to Python 3, I'm thinking we may make numpy installation a default

I agree.

So the integration goes more like [p_mid12 -> p_mid12] over p2

Ah, okay. So another way to describe the test is - the code integrates the accelerometer data from p_mid12 -> p_mid21 to obtain an estimated change in velocity. The final scale factors are then calculated by comparing measured_velocity_delta to requested_velocity_delta. That is clever. So, the actual turn around point (eg, p2) and the requested acceleration don't really matter. There is a chance of resonance skewing the actual velocity at p_mid12, but such a resonance is very likely to have the same impact to the actual velocity at p_mid21 - so they should cancel each other.

I don't think I'll bother with looking at integrating to a position.

After figuring out the gravity vector, _save_gravity method applies gravity transform to the chip via set_transform method.

Ah, okay. I figured I must have been missing something. Thanks.

-Kevin

KevinOConnor avatar Nov 19 '21 19:11 KevinOConnor

Ah, okay. So another way to describe the test is - the code integrates the accelerometer data from p_mid12 -> p_mid21 to obtain an estimated change in velocity. The final scale factors are then calculated by comparing measured_velocity_delta to requested_velocity_delta. That is clever. So, the actual turn around point (eg, p2) and the requested acceleration don't really matter. There is a chance of resonance skewing the actual velocity at p_mid12, but such a resonance is very likely to have the same impact to the actual velocity at p_mid21 - so they should cancel each other.

Yes, exactly, that's another way to think about it.

dmbutyugin avatar Nov 19 '21 20:11 dmbutyugin

So, I take it that the results are more or less unchanged after the calibration? There are some small differences in amplitude, but they may generally be a result of small non-repeatability of the test.

Exactly, the results line up within a few Hz for the recommendation compared to current Git, and also compared to my previous results.

BTW, if anyone is willing to help testing this feature, it would be really helpful if you could run the calibration multiple times - at the same point, and maybe also at a few points (not far away, just off by some value, e.g. +/- pi mm from the original point) - and report the measurement results for each attempt. This will help us understand the level of noise of the test, how repeatable it is, and whether it has any location biases.

 09:18:32
mcu: stepper_x:132948 stepper_y:-28114 stepper_z:-257225
stepper: stepper_x:169.000000 stepper_y:112.000000 stepper_z:10.000016
kinematic: X:169.000000 Y:112.000000 Z:10.000016
toolhead: X:169.000000 Y:112.000000 Z:10.000000 E:0.000000
gcode: X:169.000000 Y:112.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:19:13
Calibrating z axis
Detected gravity direction: -0.084324, 0.041371, 1.028172
SAVE_CONFIG command will update adxl345 configuration with gravity = -826.9,405.7,10082.9 parameter
Calibrating x axis
Detected x direction: 1.097918, -0.007395, 0.045593
Calibrating y axis
Detected y direction: -0.011582, 0.957212, 0.006967
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
0.907811012,0.010445708,0.074032505
0.008755689,1.045106986,-0.041334254
-0.040315133,-0.007544957,0.969597501
09:20:24
Calibrating z axis
Detected gravity direction: -0.084221, 0.041318, 1.027654
SAVE_CONFIG command will update adxl345 configuration with gravity = -825.9,405.2,10077.8 parameter
Calibrating x axis
Detected x direction: 1.057504, 0.005483, 0.025426
Calibrating y axis
Detected y direction: 0.003622, 0.940258, -0.019774
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
0.943771281,-0.002006999,0.077426745
-0.004473853,1.062649114,-0.043091861
-0.023436895,0.020497131,0.970345444

09:20:36
mcu: stepper_x:130388 stepper_y:-25554 stepper_z:-257195
stepper: stepper_x:168.000000 stepper_y:113.000000 stepper_z:10.001187
kinematic: X:168.000000 Y:113.000000 Z:10.001187
toolhead: X:168.000000 Y:113.000000 Z:10.001184 E:0.000000
gcode: X:168.000000 Y:113.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:21:01
Calibrating z axis
Detected gravity direction: -0.084397, 0.041291, 1.028240
SAVE_CONFIG command will update adxl345 configuration with gravity = -827.6,404.9,10083.6 parameter
Calibrating x axis
Detected x direction: 0.923181, -0.016442, 0.015173
Calibrating y axis
Detected y direction: 0.016476, 0.944344, 0.015395
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
1.081376482,-0.020326989,0.089574326
0.019538738,1.059263015,-0.040932674
-0.016249209,-0.015560022,0.971826750

09:21:19
mcu: stepper_x:127828 stepper_y:-22994 stepper_z:-257165
stepper: stepper_x:167.000000 stepper_y:114.000000 stepper_z:10.002357
kinematic: X:167.000000 Y:114.000000 Z:10.002357
toolhead: X:167.000000 Y:114.000000 Z:10.002359 E:0.000000
gcode: X:167.000000 Y:114.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:21:43
Calibrating z axis
Detected gravity direction: -0.084448, 0.041499, 1.027709
SAVE_CONFIG command will update adxl345 configuration with gravity = -828.1,407.0,10078.4 parameter
Calibrating x axis
Detected x direction: 1.137504, -0.010086, 0.015716
Calibrating y axis
Detected y direction: -0.007670, 0.958323, 0.016936
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
0.878175044,0.005757079,0.071927859
0.009831084,1.044299028,-0.041361353
-0.013591367,-0.017297265,0.972620128

09:21:54
mcu: stepper_x:125268 stepper_y:-20434 stepper_z:-257135
stepper: stepper_x:166.000000 stepper_y:115.000000 stepper_z:10.003527
kinematic: X:166.000000 Y:115.000000 Z:10.003527
toolhead: X:166.000000 Y:115.000000 Z:10.003524 E:0.000000
gcode: X:166.000000 Y:115.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:22:20
Calibrating z axis
Detected gravity direction: -0.084312, 0.041418, 1.027813
SAVE_CONFIG command will update adxl345 configuration with gravity = -826.8,406.2,10079.4 parameter
Calibrating x axis
Detected x direction: 0.858383, -0.005606, 0.031145
Calibrating y axis
Detected y direction: 0.005452, 0.953433, 0.015253
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
1.161459403,-0.008171165,0.095604318
0.008363380,1.049459443,-0.041604454
-0.035318652,-0.015326373,0.970660227

09:22:35
mcu: stepper_x:112468 stepper_y:-7634 stepper_z:-257020
stepper: stepper_x:161.000000 stepper_y:120.000000 stepper_z:10.008014
kinematic: X:161.000000 Y:120.000000 Z:10.008014
toolhead: X:161.000000 Y:120.000000 Z:10.008008 E:0.000000
gcode: X:161.000000 Y:120.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:23:01
Calibrating z axis
Detected gravity direction: -0.084323, 0.041558, 1.027850
SAVE_CONFIG command will update adxl345 configuration with gravity = -826.9,407.5,10079.8 parameter
Calibrating x axis
Detected x direction: 1.111135, -0.019136, 0.023828
Calibrating y axis
Detected y direction: 0.011476, 0.943923, 0.015969
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
0.898180077,-0.012174784,0.074177628
0.019138241,1.059873999,-0.041283204
-0.021119339,-0.016183973,0.971826271

09:23:20
mcu: stepper_x:109908 stepper_y:-5074 stepper_z:-257006
stepper: stepper_x:160.000000 stepper_y:121.000000 stepper_z:10.008560
kinematic: X:160.000000 Y:121.000000 Z:10.008560
toolhead: X:160.000000 Y:121.000000 Z:10.008574 E:0.000000
gcode: X:160.000000 Y:121.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:23:44
Calibrating z axis
Detected gravity direction: -0.084241, 0.041375, 1.027804
SAVE_CONFIG command will update adxl345 configuration with gravity = -826.1,405.8,10079.3 parameter
Calibrating x axis
Detected x direction: 0.942824, -0.013624, 0.023581
Calibrating y axis
Detected y direction: 0.002565, 0.953082, 0.008658
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
1.058417916,-0.003638201,0.086896323
0.016190046,1.049556291,-0.040923975
-0.024419392,-0.008757722,0.971299460

09:24:32
mcu: stepper_x:-197292 stepper_y:174126 stepper_z:-258193
stepper: stepper_x:40.000000 stepper_y:191.000000 stepper_z:9.962253
kinematic: X:40.000000 Y:191.000000 Z:9.962253
toolhead: X:40.000000 Y:191.000000 Z:9.962262 E:0.000000
gcode: X:40.000000 Y:191.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:24:58
Calibrating z axis
Detected gravity direction: -0.082982, 0.040114, 1.027720
SAVE_CONFIG command will update adxl345 configuration with gravity = -813.8,393.4,10078.5 parameter
Calibrating x axis
Detected x direction: 0.948745, -0.007422, 0.043407
Calibrating y axis
Detected y direction: 0.007406, 0.970404, 0.005319
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
1.050063458,-0.008480788,0.085117375
0.009866316,1.030639566,-0.039431739
-0.044401404,-0.004975891,0.969636849

09:25:25
mcu: stepper_x:212308 stepper_y:-209874 stepper_z:-259491
stepper: stepper_x:200.000000 stepper_y:41.000000 stepper_z:9.911616
kinematic: X:200.000000 Y:41.000000 Z:9.911616
toolhead: X:200.000000 Y:41.000000 Z:9.911603 E:0.000000
gcode: X:200.000000 Y:41.000000 Z:10.010143 E:0.000000
gcode base: X:0.000000 Y:0.000000 Z:0.000000 E:0.000000
gcode homing: X:0.000000 Y:0.000000 Z:0.000000
09:25:50
Calibrating z axis
Detected gravity direction: -0.084243, 0.041275, 1.027965
SAVE_CONFIG command will update adxl345 configuration with gravity = -826.1,404.8,10080.9 parameter
Calibrating x axis
Detected x direction: 0.928608, -0.007538, 0.021667
Calibrating y axis
Detected y direction: 0.000487, 0.963350, 0.012876
Computing axes transform
SAVE_CONFIG command will also update adxl345 configuration with axes_transform =
1.074810161,-0.001721965,0.088151319
0.009386218,1.038587020,-0.040932151
-0.022771934,-0.012973232,0.971450230

Sineos avatar Nov 20 '21 09:11 Sineos

I have tried this and everything went really well, I'll include the klippy.log relevant to when I was measuring.

I wasn't going to comment on this, but then I hit the following snag: I've upgraded the RPi to the Bullseye release (not Buster), which is the latest you'll be downloading when using the Raspberry Pi Imager tool.

In this version of the OS both python-numpy and python-matplotlib are not present, and you are unable to run the extra scripts included (calibrate_shaper.py and graph_accelerometer.py) to generate the graphs. However, measuring and getting a response in the console works perfectly fine (probably due to the env numpy installed via pip). Just wanted to give this heads up first. Changing the first line of the calibrate_shaper.py file from #!/usr/bin/env python2 to #!/usr/bin/env python3 seems to have worked (after installing python3-numpy and python3-matplotlib), however I don't think I was supposed to be doing that. I'm also unsure whether this is the right place to give you a notice of this. If I'm in the wrong place, please point me to the correct place where I can re-add this issue. klippy.log

LeandroMarceddu avatar Nov 20 '21 12:11 LeandroMarceddu

In this version of the OS both python-numpy and python-matplotlib are not present, and you are unable to run the extra scripts included (calibrate_shaper.py and graph_accelerometer.py) to generate the graphs.

Just call the script via

python3 ~/klipper/scripts/calibrate_shaper.py ...

Maybe worth a note in the docs during the transition between Python 2 and 3

Sineos avatar Nov 20 '21 12:11 Sineos

Maybe worth a note in the docs during the transition between Python 2 and 3

While that might work it's not so much the issue. The issue is that we'll be seeing other users reporting the same thing due to Raspberry images now being Bullseye and not Buster.

Doing what you suggested also involves installing python3-numpy and python3-matplotlib, FYI.

LeandroMarceddu avatar Nov 20 '21 12:11 LeandroMarceddu

The issue is that we'll be seeing other users reporting the same thing due to Raspberry images now being Bullseye and not Buster.

While this might be true, there are hundreds or even thousands of Klipper installs that are not based on Bullseye.

Sineos avatar Nov 20 '21 14:11 Sineos

Unfortunately, the problems with the script are due to Python 2 being EOLed. Fortunately, both calibrate_shaper.py and graph_accelerometer.py are fully Python 3-compatible. I checked and it seems that python3-numpy and python3-matplotlib are available at least as far back as the Debian stretch release. So, maybe I should just update the installation instructions and the script to use Python 3 by default. It doesn't look like this will break any existing installations badly - in the worst case, users will have to install the python3 versions of numpy and matplotlib, which is rather fast when installed from the repository.

dmbutyugin avatar Nov 21 '21 23:11 dmbutyugin

@Sineos, thanks for doing more tests!

Gravity direction detection seems to be working very well, e.g.

Detected gravity direction: -0.084448, 0.041499, 1.027709
Detected gravity direction: -0.082982, 0.040114, 1.027720
Detected gravity direction: -0.084243, 0.041275, 1.027965

Unfortunately, it seems that detecting direction in motion works less reliably in your case:

Detected x direction: 0.923181, -0.016442, 0.015173
Detected x direction: 0.948745, -0.007422, 0.043407
Detected x direction: 1.111135, -0.019136, 0.023828

with a whopping delta of ~0.19, and more reasonable for Y, but still:

Detected y direction: -0.007670, 0.958323, 0.016936
Detected y direction: 0.016476, 0.944344, 0.015395
Detected y direction: 0.007406, 0.970404, 0.005319
Detected y direction: 0.000487, 0.963350, 0.012876

I wonder why that could be (besides, obviously, the method being unreliable)? I've added an optional OUTPUT parameter to the ACCELEROMETER_CALIBRATE command, which writes the raw accelerometer output to /tmp/ with that filename. Could you please run the test a few more times, this time specifying this parameter (changing it appropriately, so the output isn't overwritten by different runs), and post the results?

dmbutyugin avatar Nov 21 '21 23:11 dmbutyugin

Note: The branch that is referenced for this PR is: https://github.com/dmbutyugin/klipper/tree/accel-calibrate The branch where the output command has been added is: https://github.com/dmbutyugin/klipper/tree/resonance-test So I went with the resonance-test, hoping this was correct

I went across my bed diagonally and took the respective measurements

| No | stepper_x | stepper_y | stepper_z | Gravity 1 | Gravity 2 | Gravity 3 | Detected X 1 | Detected X 2 | Detected X 3 | Detected Y 1 | Detected Y 2 | Detected Y 3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 25 | 7 | 9,906778 | -0,082895 | 0,040051 | 1,028623 | 1,017847 | 0,000847 | 0,033251 | 0,012229 | 0,96162 | -0,008558 |
| 2 | 35 | 17 | 9,911499 | -0,083154 | 0,040155 | 1,028621 | 0,992723 | -0,010688 | 0,018474 | 0,005379 | 0,979626 | -0,02318 |
| 3 | 45 | 27 | 9,922929 | -0,08331 | 0,040027 | 1,02895 | 1,019833 | -0,004417 | 0,00998 | 0,00727 | 0,967688 | 0,01695 |
| 4 | 55 | 37 | 9,934984 | -0,083068 | 0,040111 | 1,02856 | 0,986658 | -0,003768 | 0,030925 | 0,008267 | 0,965147 | -0,002305 |
| 5 | 65 | 47 | 9,948053 | -0,083381 | 0,040159 | 1,029356 | 1,04673 | -0,008544 | 0,041392 | 0,005102 | 0,980467 | 0,005441 |
| 6 | 75 | 57 | 9,967871 | -0,083285 | 0,040383 | 1,028929 | 1,022943 | -0,010889 | 0,02893 | 0,017465 | 0,962392 | -0,015629 |
| 7 | 85 | 67 | 9,983514 | -0,083206 | 0,040204 | 1,029566 | 1,039279 | -0,004937 | 0,048503 | 0,003196 | 0,946098 | -0,000594 |
| 8 | 95 | 77 | 9,994789 | -0,082776 | 0,040263 | 1,029805 | 0,985432 | -0,011133 | 0,037906 | 0,008698 | 0,950752 | -0,008747 |
| 9 | 105 | 87 | 10,004893 | -0,083025 | 0,04035 | 1,029306 | 1,054679 | -0,009271 | 0,010923 | 0,018242 | 0,92963 | -0,025668 |
| 10 | 115 | 97 | 10,009574 | -0,08291 | 0,040771 | 1,02945 | 1,016372 | -0,01237 | 0,010796 | 0,023711 | 0,971254 | -0,026415 |
| 11 | 135 | 117 | 10,011681 | -0,08241 | 0,040926 | 1,029829 | 1,008219 | -0,004324 | 0,013935 | 0,001343 | 0,963999 | -0,000718 |
| 12 | 145 | 127 | 10,012695 | -0,08305 | 0,041583 | 1,029027 | 1,004138 | -0,013729 | 0,014595 | 0,005832 | 0,956253 | -0,031211 |
| 13 | 155 | 137 | 10,012968 | -0,082694 | 0,041441 | 1,029106 | 1,005826 | -0,000782 | 0,022782 | 0,01278 | 0,95986 | 0,005339 |
| 14 | 165 | 147 | 10,011213 | -0,082685 | 0,041706 | 1,029249 | 0,995553 | -0,007262 | 0,032393 | 0,004279 | 0,956411 | 0,006594 |
| 15 | 175 | 157 | 10,003098 | -0,082577 | 0,041548 | 1,029763 | 1,017841 | -0,01103 | 0,01428 | 0,006686 | 0,986026 | -0,031201 |
| 16 | 185 | 167 | 9,989873 | -0,082504 | 0,04192 | 1,0291 | 1,085289 | -0,004978 | 0,001093 | 0,013378 | 0,943198 | -0,02412 |
| 17 | 195 | 177 | 9,97856 | -0,082816 | 0,04231 | 1,028781 | 1,027493 | -0,008452 | 0,017454 | 0,005407 | 0,959219 | -0,03026 |
| 18 | 205 | 187 | 9,970406 | -0,082695 | 0,042242 | 1,028907 | 1,0641 | -0,014923 | 0,02916 | 0,002143 | 0,979318 | 0,002921 |
| 19 | 215 | 197 | 9,959951 | -0,083341 | 0,043076 | 1,029209 | 1,032261 | -0,003051 | 0,023767 | 0,01874 | 0,959452 | -0,008451 |
| 20 | 225 | 207 | 9,948677 | -0,083014 | 0,042827 | 1,029373 | 1,068495 | -0,005056 | 0,029867 | 0,012138 | 0,931593 | -0,02542 |
| 21 | 235 | 217 | 9,942591 | -0,083163 | 0,042727 | 1,029663 | 1,036765 | -0,002424 | 0,031403 | -0,00097 | 0,969014 | -0,05851 |
| 22 | 245 | 227 | 9,940289 | -0,083412 | 0,043069 | 1,028988 | 1,06141 | 0,004086 | 0,01594 | 0,023891 | 0,96077 | -0,02097 |

Side notes:

  • Klipper output, csv and log attached as zip
  • Idea was to go by +10 for x and +10 for y but I messed up at one place with x (Step 10 to 11)
  • What strikes me as strange is the fluctuation of the Z values (no movement command involving Z has been issued)

20211122.zip

Edit: Is the Z fluctuation my bed mesh?

Sineos avatar Nov 22 '21 10:11 Sineos

Given that I did not botch the calculation somewhere, this does not look too bad - or is it that sensitive? At least my printer is not causing any gravity fluctuation in the space-time continuum. Sort of comforting to know 😉

| | Min | Max | Range | Dev |
|---|---|---|---|---|
| Gravity 1 | -0,083412 | -0,08241 | 0,001002 | 0,00029 |
| Gravity 2 | 0,040027 | 0,043076 | 0,003049 | 0,001064 |
| Gravity 3 | 1,02856 | 1,029829 | 0,001269 | 0,000377 |
| Detected X 1 | 0,985432 | 1,085289 | 0,099857 | 0,02722 |
| Detected X 2 | -0,014923 | 0,004086 | 0,019009 | 0,00478 |
| Detected X 3 | 0,001093 | 0,048503 | 0,04741 | 0,011533 |
| Detected Y 1 | -0,00097 | 0,023891 | 0,024861 | 0,006914 |
| Detected Y 2 | 0,92963 | 0,986026 | 0,056396 | 0,01426 |
| Detected Y 3 | -0,05851 | 0,01695 | 0,07546 | 0,017185 |
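For anyone who wants to double-check such statistics, a quick numpy sketch (using only the first three "Detected X" rows from the table above, with decimal commas converted to points; whether the "Dev" column is a sample or population standard deviation is an assumption):

```python
import numpy as np

# First three "Detected X" rows from the table above (decimal commas -> points)
detected_x = np.array([
    [1.017847,  0.000847, 0.033251],
    [0.992723, -0.010688, 0.018474],
    [1.019833, -0.004417, 0.009980],
])

mins, maxs = detected_x.min(axis=0), detected_x.max(axis=0)
ranges = maxs - mins
devs = detected_x.std(axis=0, ddof=1)   # sample standard deviation (assumption)
print(mins, maxs, ranges, devs)
```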

Sineos avatar Nov 22 '21 11:11 Sineos

Note: The branch that is referenced for this PR is: https://github.com/dmbutyugin/klipper/tree/accel-calibrate The branch where the output command has been added is: https://github.com/dmbutyugin/klipper/tree/resonance-test So I went with the resonance-test, hoping this was correct

Yes, I pushed the commit to that branch to test on my printer, but then I forgot to update this branch, sorry for the confusion. But you did right, thanks!

Edit: Is the Z fluctuation my bed mesh?

Yes, I think so.

Given that I did not botch the calculation somewhere, this does not look too bad - or is it that sensitive?

Well, gravity looks good: less than 0.5% error range (the values in the vector are already normalized to a 1.0 scale). Y does not look particularly good (6-8% error range for the 2nd and 3rd coordinates), and X looks quite bad (up to 10% error range this time, and ~19% in your previous runs). I mean, it is not a 50% error rate, but as it is, it is not very reassuring.

BTW, I've looked at the test results you've shared. It seems that the measurements at the end of the test are incomplete: the total test duration should be at least 6.197 seconds, but the measurements stop at ~6.120 seconds. It probably means that the measurements do not stop very precisely at the end. That doesn't matter for the resonance testing, but here it may play a role. I'll need to fix that.

Separately, I'll also try to run the test in SpreadCycle mode. I use StealthChop on my printer, and so far I've been running the tests in that mode. I wonder if SpreadCycle mode could also add a lot of noise?

dmbutyugin avatar Nov 23 '21 21:11 dmbutyugin

@Sineos, BTW, I tried to fix the missing measurements at the end of the test (in the correct accel-calibrate branch); you can give it a try when you have an opportunity.

dmbutyugin avatar Nov 24 '21 01:11 dmbutyugin

Here you go: https://docs.google.com/spreadsheets/d/1BkuvFhTMRITxp5038nj7qZtvbPXLoude6Crlw0jRCmc/edit?usp=sharing https://drive.google.com/file/d/1rvJA_ny2evd5NFmxr2WSFUelVwZnKp_7/view?usp=sharing

Noteworthy: I have been experiencing multiple

adxl345 measured spurious acceleration on x axis: 603.709 vs 500.000 (mm/sec^2)
adxl345 measured spurious acceleration on x axis: 605.678 vs 500.000 (mm/sec^2)
adxl345 measured spurious acceleration on x axis: 603.518 vs 500.000 (mm/sec^2)
adxl345 measured spurious acceleration on x axis: 600.761 vs 500.000 (mm/sec^2)

Never seen this before. Strangely, the spurious value is always around ~600. Maybe my ADXL is going bad. In any case, I have ordered a new one.

Sineos avatar Nov 24 '21 17:11 Sineos

Going for a (hopefully high quality) ADXL this time: https://www.adafruit.com/product/1231 Is there anything against the Adafruit version?

https://www.adafruit.com/product/4097 could this be an interesting alternative?

Sineos avatar Nov 25 '21 20:11 Sineos

I have the adafruit one, it is high quality and works great.

Gilabite avatar Nov 25 '21 20:11 Gilabite

Test have been repeated with the new ADXL: https://drive.google.com/file/d/1eHwdA0-ubEJ28hVajCxOjaglh4is-UG0/view?usp=sharing

Results have been amended to: https://docs.google.com/spreadsheets/d/1BkuvFhTMRITxp5038nj7qZtvbPXLoude6Crlw0jRCmc/edit?usp=sharing as 'Spread Cycle New ADXL' / 'Spread Cycle New ADXL 2'. I chose to make 2 runs to see potential differences.

Noteworthy:

  • The spurious acceleration errors did not pop up again
  • The Adafruit ADXL345 behaved like an ill-tempered drama queen. With very short jumper cables it worked out of the box. Any longer cable would lead to errors. I crimped like 4 different cables, and only a slaughtered CAT 5E patch cable was good enough

Sineos avatar Nov 30 '21 16:11 Sineos

@Sineos thanks! TBH, the Adafruit board uses higher-quality supporting components and has a logic level shifter. And there is little to no chance that, for example, capacitors will be soldered in the wrong polarity :) But otherwise the chip is likely the same adxl345 chip from Analog Devices.

Spurious acceleration values around ~600 are simply a result of an error threshold set somewhat arbitrarily at 20%: the expected acceleration of 500 times 1.2 (100% + 20%) equals 600. It seems your results were on the boundary of that threshold, often exceeding it. I'll take a look at your new results though.

BTW, separately, I tested the sensor calibration on my printer with the steppers in SpreadCycle mode. Incidentally, I also got spurious acceleration on one of the axes (so the measurement results are quite off). So at least I have something to test personally.

A few notable things: I have a problem on the Y axis where the motor does not work very well in SpreadCycle (namely, it is very loud in this mode; I suppose the SpreadCycle parameters set by default are not good for this stepper). Then, it seems that I hit one of the resonance frequencies of the Y axis:

(Y axis acceleration/resonance plot)

The accelerometer is mounted with the Z axis inverted (looking down), but X is X, Y is Y and Z is Z otherwise. So the resonance is really in the Z direction. I thought that perhaps the resonance throws the measurements off, but it appears that resonances on the Z axis do not have an influence here:

measured_acccel = 564.706
accel_dir = [  9.84480035 563.96018375 -27.29518106]

So, it is really the integration of the Y axis that gives bogus results. I'll need to look more into it.

dmbutyugin avatar Dec 02 '21 01:12 dmbutyugin

Together with the ADXL345, I ordered an Adafruit ADXL343: https://drive.google.com/file/d/1gvg6gi2u50JhvRilwEVVx2VdVgCIwuAe/view?usp=sharing

Results have been amended to: https://docs.google.com/spreadsheets/d/1BkuvFhTMRITxp5038nj7qZtvbPXLoude6Crlw0jRCmc/edit?usp=sharing

Noteworthy:

  • Seems to work as a drop-in replacement
  • No spurious acceleration errors during two runs (44 measurements)
  • Axes noise seems a lot lower: Axes noise for xy-axis accelerometer: 18.575276 (x), 16.907456 (y), 24.822339 (z)

Sineos avatar Dec 02 '21 11:12 Sineos

Running ACCELEROMETER_CALIBRATE fails with:

Traceback (most recent call last):
  File "/opt/klipper/klippy/webhooks.py", line 245, in _process_request
    func(web_request)
  File "/opt/klipper/klippy/webhooks.py", line 415, in _handle_script
    self.gcode.run_script(web_request.get_str('script'))
  File "/opt/klipper/klippy/gcode.py", line 217, in run_script
    self._process_commands(script.split('\n'), need_ack=False)
  File "/opt/klipper/klippy/gcode.py", line 199, in _process_commands
    handler(gcmd)
  File "/opt/klipper/klippy/gcode.py", line 136, in <lambda>
    func = lambda params: origfunc(self._get_extended_params(params))
  File "/opt/klipper/klippy/gcode.py", line 145, in <lambda>
    handler = lambda gcmd: self._cmd_mux(cmd, gcmd)
  File "/opt/klipper/klippy/gcode.py", line 303, in _cmd_mux
    values[key_param](gcmd)
  File "/opt/klipper/klippy/extras/adxl345.py", line 394, in cmd_ACCELEROMETER_CALIBRATE
    if not output.replace('-', '').replace('_', '').isalnum():
AttributeError: 'NoneType' object has no attribute 'replace'

Looks like it expects an OUTPUT argument.

wlhlm avatar Dec 04 '21 23:12 wlhlm

So, I have been looking into the available data and wanted to share my findings and thoughts.

As the toolhead moves at a constant speed, the steppers create some periodic forces (due to imbalances in the rotor, differences between different windings/magnets, etc.) that affect toolhead motion. Unfortunately, my analysis that I posted on Discourse indicates that these forces do not just depend on the position of the stepper, but also on the direction it rotates. This makes the test that calibrates the accelerometer scaling unreliable, because it can pick up some systematic distortions that affect the results in unpredictable ways, which Sineos and I experienced in SpreadCycle mode.

To give a more illustrative (though theoretical) example, let's assume the test tries to move the toolhead back and forth with the velocity of 10 mm/sec. So, then the toolhead, supposedly, accelerates from 0 to 10 mm/sec, cruises for some time, decelerates to 0, accelerates in reverse direction, and so forth. The test tries to measure the speed difference between middle points during cruising by integrating accelerations. So, supposedly it can measure the acceleration as (10 - (-10)) / (T / 2), where T is the full period of the motion.

Unfortunately, it is possible that the speed at the middle of cruising will not be 10 mm/sec exactly due to non-linearities of the stepper motion. However, I was hoping that the speed would deviate from the expected one consistently, e.g. in the forward motion it may be 12 mm/sec, and in the backwards motion 8 mm/sec. Then the total speed delta would still be 20 mm/sec (on average), and the calculations for the acceleration scale would still work correctly. But it seems that we can get something like 12 mm/sec in the forward motion and 11 mm/sec in the backward motion, or 11 mm/sec forward and 8 mm/sec backwards. And these results will be reproducible over multiple periods of the test. Then we will measure not the expected acceleration, but some different value - which is not noise or an error of the measurements themselves, but rather an error in the methodology of the measurements.

So, to summarize: I think the current approach to calculate and re-scale accelerometer measurements is susceptible to systematic errors and is therefore not reliable. And I suspect that if we try instead to move the toolhead from A to B and integrate the position instead of velocity (as previously suggested by Kevin), we may run into similar issues, which wouldn't allow us to reliably determine the scale of accelerometer readings.

Note that this only affects the measurements of the accelerometer scale. In my experiments, the direction could still be determined pretty reliably. Therefore, there is an option to amend this test to only detect the direction, but keep the original scale and normalize all detected direction vectors to 1.0 norm. Separately, we could try to keep the coordinate transformation for the accelerometer as orthonormal as possible (e.g. when both X and Y directions are available, make Z orthogonal to them, and when only one direction is available, e.g. on a bed slinger, use the gravity vector to compute the direction of the missing axis, and then make Z orthogonal to the two generated axes instead of aligning it with the gravity direction).

@Sineos, @KevinOConnor I'd like to get your thoughts on this matter. And what do you think about this accelerometer calibration, given its limitations? I personally think that it might still be useful to the users even in its more limited form, but maybe you have other perspectives, and it'd be better to arrive at some consensus before attempting any code changes.

dmbutyugin avatar Feb 01 '22 19:02 dmbutyugin

Hi,

If you want my 2 cents, I do think that it would be good to add basic accelerometer calibration to Klipper. A basic calibration makes it easier to perform other analysis later on. Calibration doesn't seem to impact input_shaper calibration, but poor calibration can complicate the development of other future tools. From my perspective, most important is calibrating the sensor direction relative to the toolhead coordinate system and removing static accelerometer bias (eg, gravity). It doesn't seem that calibrating magnitude is as important (accelerometer noise will demand some form of compensation even if one were to perfectly calibrate the magnitude). YMMV.

I suspect that if we try instead to move the toolhead from A to B and integrate the position instead of velocity (as previously suggested by Kevin), we may run into similar issues, which wouldn't allow us to reliably determine the scale of accelerometer readings.

FWIW, I think integrating to a position may have problems with accelerometer noise, but I'd be surprised if it were noticeably impacted by internal stepper forces (eg, detent forces). I think if we command the toolhead from X10 to X110, then we can safely use a movement distance of 100mm in the calculations. It is true that, depending on the choice of start and end location, the actual distance might have a systemic bias - for example 99.990mm instead of 100mm. However, I'm confident accelerometer noise will be much much greater anyway. Said another way, stepper detent forces cause position errors in the microns, but the accelerometer can't "dead reckon" to a position that precise for it to matter.

Separately, we could try to keep the coordinate transformation for the accelerometer as orthonormal as possible

FWIW, I'm not sure that matters much. If it was me, I guess I'd implement whatever was simplest. That said, when only X is measurable, your proposal to use gravity to find an "orthogonal Y" and then calculate an "orthogonal Z" from XY does seem like a nice solution.

Cheers, -Kevin

KevinOConnor avatar Feb 01 '22 23:02 KevinOConnor

FYI, I decided to try implementing an "integrate to position" test. The code is at: https://github.com/KevinOConnor/klipper-dev/tree/work-adxlcal-20220203 . I didn't implement a full calibration routine - just enough code to see if an "integrate to position" test produces stable results.

The test is pretty simple - it moves the toolhead a distance along a given toolhead axis and calculates the distance travelled as measured by the accelerometer on each of the accelerometer's axes. It moves back and forth 3 times. The results for each run are reported.
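For reference, a simplified sketch of the "integrate to position" idea (illustrative only, not the code in the linked branch): subtract the static bias measured while the toolhead is at rest, integrate once to velocity and once more to position, and take the net displacement as the measured travel.

```python
import numpy as np

def dead_reckon_distance(times, accel, static_bias):
    """Hypothetical helper: double-integrate one accelerometer axis to a distance.

    times       -- sample timestamps (s)
    accel       -- raw acceleration samples for that axis (mm/s^2)
    static_bias -- average reading while the toolhead is at rest (gravity/offset)
    """
    a = np.asarray(accel, dtype=float) - static_bias
    dt = np.diff(times, prepend=times[0])    # first interval is zero
    vel = np.cumsum(a * dt)                  # acceleration -> velocity
    pos = np.cumsum(vel * dt)                # velocity -> position
    return pos[-1]                           # net displacement over the recording
```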

Unfortunately, it seems the results (at least on my Voron Zero test printer) are not particularly stable.

Here's the output from a couple of test runs:

G28
G1 X10 Y60 Z30
ACCELEROMETER_CALIBRATE axis=x distance=100 velocity=100 chip=adxl345

1: b=-308.789,-1248.316,9322.370 p=-102.847,-23.122,16.798 d=106.744
2: b=-344.300,-1248.090,9296.977 p=108.382,-20.825,-21.189 d=112.380
3: b=-359.524,-1223.377,9305.843 p=-136.020,-19.332,1.510 d=137.395
4: b=-334.702,-1272.595,9308.141 p=119.294,-78.054,-12.294 d=143.090
5: b=-342.738,-1215.038,9289.534 p=-125.655,-4.485,-17.944 d=127.009

ACCELEROMETER_CALIBRATE axis=x distance=100 velocity=100 chip=adxl345

1: b=-359.004,-1198.099,9287.911 p=-143.623,23.747,-9.041 d=145.853
2: b=-338.889,-1221.614,9325.260 p=111.350,17.303,9.104 d=113.054
3: b=-357.513,-1219.669,9289.736 p=-141.559,10.105,-14.594 d=142.667
4: b=-362.213,-1201.137,9308.005 p=77.720,38.267,-2.526 d=86.667
5: b=-324.727,-1215.530,9296.271 p=-109.945,4.198,-6.306 d=110.206

G1 X60 Y10
ACCELEROMETER_CALIBRATE axis=y distance=100 velocity=100 chip=adxl345

1: b=-395.162,-1194.951,9376.616 p=-54.440,-53.355,-54.068 d=93.455
2: b=-343.929,-1249.818,9309.342 p=-4.051,20.545,34.591 d=40.436
3: b=-353.827,-1205.646,9387.570 p=-23.272,-69.102,-43.401 d=84.854
4: b=-464.253,-1243.329,9256.429 p=-208.807,36.861,-48.020 d=217.406
5: b=-314.698,-1205.850,9365.993 p=27.862,-84.245,-54.346 d=104.053

ACCELEROMETER_CALIBRATE axis=y distance=100 velocity=100 chip=adxl345

1: b=-322.259,-1236.096,9339.885 p=-12.519,-90.371,-60.660 d=109.560
2: b=-428.438,-1219.865,9263.955 p=-132.269,76.181,-33.890 d=156.356
3: b=-362.720,-1140.222,9389.160 p=-19.248,-31.511,-33.682 d=49.979
4: b=-303.742,-1275.266,9278.723 p=52.338,11.138,-7.414 d=54.021
5: b=-308.361,-1221.295,9355.873 p=9.337,-82.731,-50.918 d=97.592

Ideally, if the test was stable the final d= value (the total distance travelled as measured by the accelerometer) on each line would have little variance from run to run (and ideally be near the requested 100mm). However, it seems the results tend to vary significantly from run to run.

It's possible that setting up a max_accel value so that the test move was always in acceleration or deceleration (and not cruising) may improve the results. (This test was run with max_accel=2000.) It's also possible that averaging the results over many runs would produce acceptable results.

-Kevin

KevinOConnor avatar Feb 04 '22 18:02 KevinOConnor