flownet3d
How do you get the point clouds of second frame for KITTI?
Thank you for your great work and code release! I found that the disparity in the KITTI scene flow dataset is defined with respect to the first frame, so the disparity of pixels in the second frame cannot be obtained directly. How do you get them: from the raw LiDAR data or some other method?
If you use the raw LiDAR data, how do you obtain such dense depth values? If you did not use the raw LiDAR data, why do you use 150 frames rather than 200?
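For context, lifting a first-frame disparity map to a 3D point cloud uses the standard stereo relation Z = f * B / d. Below is a minimal sketch of that conversion; the focal length, baseline, and centered principal point are illustrative assumptions (typical KITTI-like values), not parameters taken from the FlowNet3D release:

```python
import numpy as np

# Assumed KITTI-like calibration constants (illustrative, not from the repo):
FOCAL_PX = 721.5377   # focal length in pixels
BASELINE_M = 0.54     # stereo baseline in meters

def disparity_to_points(disp, f=FOCAL_PX, b=BASELINE_M):
    """Lift a dense disparity map (H, W) to camera-frame points (N, 3).

    Pixels with non-positive disparity are treated as invalid and dropped.
    Assumes the principal point is at the image center for simplicity.
    """
    h, w = disp.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0
    z = f * b / disp[valid]            # depth from stereo geometry
    x = (us[valid] - w / 2.0) * z / f  # back-project through the pinhole model
    y = (vs[valid] - h / 2.0) * z / f
    return np.stack([x, y, z], axis=1)

# Tiny synthetic example: a constant-disparity patch
disp = np.full((4, 4), 10.0)
pts = disparity_to_points(disp)
```

This only works for the first frame, which is exactly the issue raised above: the dataset's second-frame disparity is not given in the second frame's pixel grid, so the same back-projection cannot be applied directly.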