Implement PoseEstimator.getEstimatedPosition(double timestampSeconds)
Our vision-based localizer uses the gyro-derived robot rotation measurement as part of the transformation between camera-derived measurements (AprilTag poses) and robot coordinates, instead of the (less accurate) camera-derived rotation measurement. Because the camera input is substantially delayed, these transformations are wrong unless the robot is stationary. Since our cameras are working pretty well while the robot is moving, we've noticed this skew more this year.
To fix it, we can't maintain our own copy of the gyro measurement, because the offset is maintained privately by PoseEstimator. We made our own version of PoseEstimator which exposes poseBuffer.getSample(), and that's a fine solution for us, but it might benefit other teams to be able to use the WPI PoseEstimator with this addition.
Thanks!
I didn't understand what you wrote initially, so I tried to rewrite it. Let me know if it's not an accurate representation.
Our vision-based localizer uses the gyro-derived robot heading measurement in the camera-to-tag transformation instead of the (less accurate) camera-derived robot heading measurement.
Because the camera input is substantially delayed, the camera-to-tag transformations are wrong unless the robot is stationary. Since our cameras are working pretty well while the robot is moving, we've noticed this skew more this year.
To fix it, we can't maintain our own copy of the gyro measurement, because the offset is maintained privately by PoseEstimator. We made our own version of PoseEstimator which exposes poseBuffer.getSample(), and that's a fine solution for us, but it might benefit other teams to be able to use the WPI PoseEstimator with this addition.
I don't understand how exposing the internal pose buffer fixes the problem.
That way, we can use the robot rotation measurement corresponding to the time of the vision input.
The specific calculation that depends on the gyro is finding the tag rotation in the camera frame. The tag rotation is part of the tag pose in the camera frame, which is inverted to find the camera pose in the (known) tag frame.
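Roughly, that inversion looks like this (a minimal sketch with made-up names; tagInCamera is the vision measurement, after its rotation component has been replaced with the gyro-derived one, and tagFieldPose comes from the field layout):

```java
import edu.wpi.first.math.geometry.Pose3d;
import edu.wpi.first.math.geometry.Transform3d;

class TagFrameMath {
  /** Recover the camera's field pose from one tag observation. */
  static Pose3d cameraFieldPose(Transform3d tagInCamera, Pose3d tagFieldPose) {
    // Invert the tag pose in the camera frame to get the camera pose in the tag frame.
    Transform3d cameraInTag = tagInCamera.inverse();
    // Compose with the known field pose of the tag.
    return tagFieldPose.transformBy(cameraInTag);
  }
}
```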
Does that make sense?
Where does the internal pose buffer come into play? I'd think that it'd be simpler to maintain a separate history of the gyro poses and then look those up to correct the vision measurement you add to the pose estimator- Am I missing something?
Yeah, that's how I started, but then I realized I was just making a little mirror of the pose buffer, filled with data obtained from the pose estimator itself.
Based on your description, the data you want (gyro measurements) isn't obtained from the pose estimator- It's just obtained from the gyro, and since it's not tied to any pose estimator functionality, I'd rather users maintain their own TimeInterpolatableBuffer.
As a side note, I looked through your team's code to try to understand what you mean (Is it okay for me to post the link? I'm guessing so since it's a public repo, but better safe than sorry), and I'm not sure the code matches your description, since it's grabbing the angle of the interpolated pose estimate (determined from odometry and vision measurements), not the interpolated gyro angle. (record.poseMeters.getRotation(), not record.gyroAngle) (And as a bonus, Optional.map() is a more concise way to implement getSampledRotation(), if you care about that sort of thing.)
Oh, wow, thanks for the catch on the gyroAngle.
I'm confused about the gyro measurement vs. the pose estimator rotation measurement. The pose estimator maintains an offset, so these aren't the same measurement. Is there another way to get at the offset?
For the benefit of anyone listening in, the code is here. It's public, meant to be publicly useful; thanks for checking.
https://github.com/Team100/all24/blob/976291462796b25b5bf2601da9f136901984fd8b/lib/src/main/java/org/team100/lib/copies/PoseEstimator100.java#L158
Oh, wait, now I'm even more confused. record.poseMeters.getRotation() and record.gyroAngle are different for exactly the same reason.
The way adding vision measurements to the pose estimator works is that it gets the estimated state of the robot at some time in the past, shifts the pose estimate at that time to better match the vision pose estimate (specifically, it gets the twist from the old estimate to the vision estimate, scales it by the standard deviations, and then applies that scaled twist), and replays the updates after that time. There isn't a clean gyro offset that's applied- Instead, the vision measurements are just a series of nudges to the odometry measurements.
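For illustration, the scaled-twist nudge could be sketched like this (not WPILib's actual code; kX, kY, and kTheta stand in for the gains WPILib derives from the vision measurement standard deviations, and the replay step is omitted):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Twist2d;

class VisionNudge {
  /** Move the old estimate part of the way toward the vision estimate. */
  static Pose2d nudge(Pose2d estimateAtVisionTime, Pose2d visionPose,
      double kX, double kY, double kTheta) {
    // Twist that would move the old estimate exactly onto the vision pose.
    Twist2d full = estimateAtVisionTime.log(visionPose);
    // Scale each component, then apply the partial twist.
    Twist2d scaled = new Twist2d(kX * full.dx, kY * full.dy, kTheta * full.dtheta);
    return estimateAtVisionTime.exp(scaled);
  }
}
```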
From the sounds of it, the functionality you want (knowing the gyro measurement at a time in the past) isn't something that the pose estimator is supposed to do- Instead, maintain a separate TimeInterpolatableBuffer<Rotation2d> with your gyro measurements.
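For reference, maintaining that buffer might look something like this (a sketch; the class name and the 1.5-second history length are arbitrary):

```java
import java.util.Optional;

import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;
import edu.wpi.first.wpilibj.Timer;

class GyroHistory {
  // Keep 1.5 seconds of gyro samples; Rotation2d interpolates between them.
  private final TimeInterpolatableBuffer<Rotation2d> m_buffer =
      TimeInterpolatableBuffer.createBuffer(1.5);

  /** Call every loop with the raw gyro reading. */
  void record(Rotation2d gyroAngle) {
    m_buffer.addSample(Timer.getFPGATimestamp(), gyroAngle);
  }

  /** Interpolated gyro angle at a past timestamp, if still in the window. */
  Optional<Rotation2d> sample(double timestampSeconds) {
    return m_buffer.getSample(timestampSeconds);
  }
}
```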
I get it, except for the part where you said "your gyro measurements." That's not what I need; I need the rotational component of the pose. Isn't that exactly the history being kept by the pose estimator? I can't use the measurement from the gyro itself, because of the offset. I could ask the pose estimator for its pose estimate at the current time, and remember it, but that's exactly what line 186 of PoseEstimator does: add a sample with getEstimatedPosition().
I was thinking of creating an issue with a similar title but a different use case.
If we have a latent sensor input that we want to go from robot-relative to field-relative, it would be nice to be able to ask "where was I 100ms ago".
Though I'm still a bit confused about the conversation about the use case mentioned (see below), I do agree that being able to sample the pose estimate at earlier timestamps would be helpful.
The stuff I'm confused about: It sounds like the intent is to make the vision measurements have no effect on the angle of the pose estimate, in which case the angle is exactly equal to the gyro angle (since that's the only thing affecting it), so I don't get what offset is being referred to. Maybe you're talking about the offset in the odometry object that's set when calling resetPosition()? (See Odometry.java#L62 or Odometry.h#L57)
If we have a latent sensor input that we want to go from robot-relative to field-relative, it would be nice to be able to ask "where was I 100ms ago".
As a more concrete example, a game piece detection camera can produce a robot-relative location of a piece, but with some non-negligible latency. To localize the game piece in field-relative space most accurately, it would be helpful to use the pose from when the frame was taken instead of the current pose.
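With the proposed overload, that could look like the following sketch (getEstimatedPosition(double) is the API being proposed here, not an existing method; the other names are hypothetical):

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;

class GamePieceLocalizer {
  /** Convert a robot-relative detection to field coordinates. */
  static Translation2d fieldRelative(Pose2d poseAtCapture, Translation2d pieceInRobot) {
    // Apply the robot-relative offset to the robot's pose at frame-capture time.
    return poseAtCapture
        .transformBy(new Transform2d(pieceInRobot, new Rotation2d()))
        .getTranslation();
  }
}
```

Here poseAtCapture would come from the proposed estimator.getEstimatedPosition(captureTimestamp) call.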
Maybe you're talking about the offset in the odometry object that's set when calling resetPosition()? (See Odometry.java#L62 or Odometry.h#L57)
Yes, that's what I was thinking of.
If we have a latent sensor input that we want to go from robot-relative to field-relative, it would be nice to be able to ask "where was I 100ms ago".
We could address that use case directly by providing a function like public Pose2d getEstimatedPosition(double timestampSeconds), which would still enforce the invariants on the pose buffer.
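Internally, a sketch of that overload might look like this (field and record names taken from the existing internals mentioned above; falling back to the latest estimate when the timestamp is outside the buffer is an assumption):

```java
public Pose2d getEstimatedPosition(double timestampSeconds) {
  return m_poseBuffer
      .getSample(timestampSeconds)
      .map(record -> record.poseMeters)
      .orElseGet(this::getEstimatedPosition); // fall back to the latest estimate
}
```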
Oh, yeah, that would be great.
I think that would solve my original use case as well.