ardrone-autonomy
Document performance of state estimation
Can you include a plot of how well the drone can position itself?
For instance: given a goal position in 3D, what is the drone's actual position as it drives towards the goal, and how well does it hover at that location?
Good point, I'll add some links to the README.
You should have a look at this blog post, with slides and videos, to get a feel for the precision: https://eschnou.com/entry/advanced-programming-with-nodecopter-62-25019.html
If you only base the estimate on the IMU, you'll get a drift that increases over time due to the noise on the accelerometers/gyros/magnetometer. The solution is to also complement the estimate with a visual marker (e.g. I'm using tag detection from the bottom camera in my hover example).
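To illustrate why IMU-only dead reckoning drifts, here is a rough sketch (not code from this module). The navdata rate, the bias value and the variable names are made-up illustrative assumptions; the point is that a small constant accelerometer bias, integrated twice, grows quadratically with time, which is why an occasional absolute fix such as a detected tag is needed.

```js
// Rough sketch, not code from ardrone-autonomy: a small constant accelerometer
// bias, integrated twice, produces a position error that grows quadratically.
// The 200 Hz rate and the 0.02 m/s^2 bias are made-up illustrative numbers.
const dt = 1 / 200;              // assumed sensor rate, seconds
const bias = 0.02;               // accelerometer bias, m/s^2
let v = 0;                       // estimated velocity, m/s
let x = 0;                       // estimated position, m

for (let t = 0; t < 60; t += dt) {
  const measuredAccel = 0 + bias; // true acceleration is zero while hovering
  v += measuredAccel * dt;        // first integration: velocity error ~ bias * t
  x += v * dt;                    // second integration: position error ~ bias * t^2 / 2
}

console.log(x.toFixed(1));        // roughly 36 m of drift after one minute
```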
Also note that the Kalman filter is based on the speed provided by the drone. This speed estimate comes from optical-flow tracking with the bottom camera, and its precision depends heavily on 1) the lighting conditions and 2) the structure (texture) on the ground.
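A minimal scalar sketch of the fusion idea described above, assuming made-up variable names and noise values (this is not the module's EKF): predict the position with the velocity the drone reports from optical flow, and correct it with an absolute fix whenever a ground tag is detected.

```js
// Minimal 1D Kalman sketch of the idea described above, not the module's EKF.
// Noise values and names are illustrative assumptions.
let x = 0, P = 1;              // state (position, m) and its variance
const Q = 0.05;                // process noise: how much to trust the optical-flow speed
const R = 0.01;                // measurement noise: how much to trust the tag fix

function predict(vx, dt) {     // vx: speed reported by the drone, dt: seconds
  x += vx * dt;                // propagate position with the reported velocity
  P += Q * dt;                 // uncertainty grows while only integrating speed
}

function correct(zTag) {       // zTag: position implied by a detected ground tag
  const K = P / (P + R);       // Kalman gain
  x += K * (zTag - x);         // pull the estimate towards the absolute fix
  P *= (1 - K);                // uncertainty shrinks after the correction
}
```

The same structure generalises to 2D/3D; the point is that the tag measurement bounds the error that integrating the velocity alone would keep accumulating.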
Look at the floor surface Parrot used for their demo at Le Bourget: https://www.youtube.com/watch?v=09KVqnGCxkc
To really improve on this, one would have to implement a complete VSLAM solution based on processing images from the front camera. I've started some work on this, but it is far from done.
Just from watching the video, it does look like your drone is doing a better job of hovering in place. I should be able to improve my performance, then.
Thank you for helping!
The EKF implementation does not take GPS data into account yet; I think adding it would help greatly.
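As a hedged sketch of what such a GPS update could look like if it were added: convert lat/lon to local metres around a reference point and feed it to the filter as another, noisier position fix. The function name and the noise figure are hypothetical, not part of the module.

```js
// Hypothetical sketch only, not the module's API: turn a GPS fix into local
// metres (equirectangular approximation around a reference point) so it can be
// used as a position measurement, like the tag fix above but with a larger R.
const EARTH_RADIUS = 6371000; // metres
const DEG = Math.PI / 180;

function gpsToLocal(lat, lon, refLat, refLon) {
  return {
    x: EARTH_RADIUS * (lon - refLon) * DEG * Math.cos(refLat * DEG), // east, m
    y: EARTH_RADIUS * (lat - refLat) * DEG,                          // north, m
  };
}

// With the scalar filter sketched earlier, a GPS fix would just be another
// correct() call, only with a measurement noise R of a few metres instead of
// centimetres.
```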
How far have you come with the VSLAM solution?