V2V4Real
How should the transform matrix between two agents be correctly computed for coordinate-frame alignment?
The transform matrix is critical: it is used to apply an affine transformation to the 2D intermediate features and align them to the ego frame, which is a practical requirement in real deployments.
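For context on what "applying an affine transformation to 2D intermediate features" can look like, below is a minimal PyTorch sketch that warps one collaborator's BEV feature map into the ego frame with `affine_grid`/`grid_sample`. The function name `warp_to_ego`, the parameters `voxel_size` and `downsample_rate`, the assumption of a square BEV grid, and the axis/sign conventions are all illustrative and would need to be matched to the actual voxelization used in the codebase.

```python
import torch
import torch.nn.functional as F


def warp_to_ego(feature, T_ego_from_cav, voxel_size, downsample_rate):
    """Warp one CAV's BEV feature map into the ego frame.

    A minimal sketch under simplifying assumptions: square BEV grid
    (H == W), planar motion, and metres-per-cell given by
    voxel_size * downsample_rate. All names are illustrative.

    feature: (1, C, H, W) intermediate feature from one collaborator.
    T_ego_from_cav: 4x4 matrix mapping CAV coordinates into ego coordinates.
    """
    _, _, H, W = feature.shape
    assert H == W, "this sketch assumes a square BEV grid"

    # grid_sample pulls values from the source feature, so the affine must map
    # ego (output) grid locations back into the CAV (source) feature: use the
    # inverse of the ego-from-CAV transform.
    T = torch.as_tensor(T_ego_from_cav, dtype=feature.dtype, device=feature.device)
    T_cav_from_ego = torch.inverse(T)

    theta = torch.zeros(1, 2, 3, dtype=feature.dtype, device=feature.device)
    theta[0, :, :2] = T_cav_from_ego[:2, :2]

    # Translation in metres -> normalised [-1, 1] grid units.
    cell_size = voxel_size * downsample_rate
    theta[0, 0, 2] = T_cav_from_ego[0, 3] * 2.0 / (cell_size * W)
    theta[0, 1, 2] = T_cav_from_ego[1, 3] * 2.0 / (cell_size * H)

    grid = F.affine_grid(theta, size=list(feature.shape), align_corners=False)
    return F.grid_sample(feature, grid, align_corners=False)
```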
Note that in the V2V4Real dataset the LiDAR pose representation does not follow a 6-DoF format (unlike OPV2V and related datasets). Accordingly, the procedure for computing the transform matrix may need to change. Please verify this discrepancy and revise the relevant code as necessary.
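For reference, here is a minimal sketch of how the pairwise transform is typically assembled in OPV2V-style pipelines: each agent's pose is lifted to a 4x4 world-frame matrix, and the transform from the collaborator into the ego frame is the inverse of the ego's world matrix composed with the collaborator's. The function names, the [x, y, z, roll, yaw, pitch] ordering in degrees, and the branch that accepts an already-assembled 4x4 matrix (my guess at what V2V4Real may provide instead of a 6-DoF pose) are assumptions, not taken from the repository.

```python
import numpy as np


def pose_to_matrix(pose):
    """Build a 4x4 world-frame transform from a 6-DoF pose.

    Assumes OPV2V-style ordering [x, y, z, roll, yaw, pitch] in degrees.
    """
    x, y, z, roll, yaw, pitch = pose
    roll, yaw, pitch = np.radians([roll, yaw, pitch])

    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)

    # Rotation composed as R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])

    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T


def x1_to_x2(x1, x2):
    """Transform that maps points expressed in frame x1 into frame x2.

    Each argument is either a 6-DoF pose or an already-assembled 4x4
    world-frame matrix (the latter is the assumed V2V4Real case).
    """
    T_world_x1 = np.asarray(x1) if np.asarray(x1).shape == (4, 4) else pose_to_matrix(x1)
    T_world_x2 = np.asarray(x2) if np.asarray(x2).shape == (4, 4) else pose_to_matrix(x2)
    # T_x2_from_x1 = inv(T_world_from_x2) @ T_world_from_x1
    return np.linalg.inv(T_world_x2) @ T_world_x1
```

With such a helper, the matrix used for the feature warp sketched above would be `x1_to_x2(cav_pose_or_matrix, ego_pose_or_matrix)`; the key point to verify is which representation (6-DoF pose vs. 4x4 matrix) V2V4Real actually stores before deciding whether the pose-lifting step is needed.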