pandaset-devkit
What is the origin of the ego coordinate system?
Is it at the middle of the front bumper, at the center of the vehicle, or somewhere else?
I am also interested in this question. Many semantic segmentation networks use cylindrical depth projections, so it is important to know whether these points are given in vehicle coordinates or sensor coordinates, and how to transform between the two coordinate systems.
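For reference, here is a minimal sketch of how such a transform could look, assuming the pose format used in the dataset's poses files (position as x/y/z, heading as a w/x/y/z quaternion). This is an illustration only; please check it against the devkit's own geometry utilities before relying on it.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def world_to_sensor(points_xyz: np.ndarray, pose: dict) -> np.ndarray:
    """Transform Nx3 world-frame points into the sensor frame described by `pose`.

    `pose` is assumed to contain 'position' (x/y/z) and 'heading'
    (w/x/y/z quaternion), as in the PandaSet poses files.
    """
    t = np.array([pose["position"]["x"],
                  pose["position"]["y"],
                  pose["position"]["z"]])
    q = pose["heading"]
    # scipy expects quaternions in x/y/z/w order
    rot = R.from_quat([q["x"], q["y"], q["z"], q["w"]])
    # Invert the sensor->world pose: p_sensor = R^T (p_world - t)
    return rot.inv().apply(points_xyz - t)
```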
It would also be helpful if the data could be provided as raw data, i.e. in sensor coordinates without any motion compensation in the point cloud.
This would also help with #69.
We can temporarily provide raw data for the Pandar64, together with instructions in PandaSet Raw Data Instructions.pdf. The raw data can be downloaded at https://drive.google.com/drive/folders/1ou_HqaQq0rGR0UrGZkFYWrE3mCJDosX8?usp=sharing
Would it be possible to also provide the raw data of the Pandar GT in a similar way? There are algorithms that work better and more efficiently on an organised point cloud, i.e. a depth image, than on an unorganised one.
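In case it is useful, below is a rough sketch of rasterising an unorganised point cloud into a cylindrical range image, as projection-based methods typically do. The image size and vertical field of view are illustrative assumptions, not Pandar GT specifications, and the result is only an approximation of a truly organised (per-beam) point cloud.

```python
import numpy as np

def cylindrical_range_image(points_xyz: np.ndarray,
                            height: int = 64, width: int = 1024,
                            fov_up_deg: float = 15.0,
                            fov_down_deg: float = -25.0) -> np.ndarray:
    """Project Nx3 sensor-frame points into a (height, width) range image."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    depth = np.linalg.norm(points_xyz, axis=1)
    yaw = np.arctan2(y, x)                              # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / depth, -1.0, 1.0))    # elevation
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    # Map angles to pixel coordinates
    u = ((yaw + np.pi) / (2.0 * np.pi) * width).astype(int) % width
    v = ((fov_up - pitch) / (fov_up - fov_down) * height)
    v = np.clip(v, 0, height - 1).astype(int)
    image = np.full((height, width), np.inf)
    # Keep the closest return per pixel
    np.minimum.at(image, (v, u), depth)
    return image
```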