Is there any reference code to generate kitti dataset annotation?
I ran the project on the KITTI MOT dataset, but the result was not as expected. I suspect the generation of the KITTI dataset annotations was not correct. As described in the project, we need the following annotations, which should be converted from the original dataset: 'rgb': 'data/kitti_demo/rgb/xxx.png', 'depth': 'data/kitti_demo/depth/xxx.png', 'depth_scale': 1000.0 # the depth scale of the gt depth img.
Could anyone share some information on how to do that?
The depth scale should never be 1000.0 for KITTI. It is 256.0, because we use the original KITTI Depth dataset (Eigen splits) for fine-tuning and testing.
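To illustrate the point above: KITTI Depth ground truth is stored as uint16 PNGs where depth in meters = raw value / 256.0 (a raw value of 0 means no ground truth). A minimal sketch of generating annotation entries in the format quoted in the question, assuming the `rgb`/`depth` directory layout and field names from the demo JSON (the helper names here are illustrative, not part of the repo):

```python
import os

def kitti_depth_to_meters(raw_value, depth_scale=256.0):
    """KITTI Depth stores depth as uint16 PNG values.
    Meters = raw / 256.0; a raw value of 0 means 'no ground truth'."""
    return raw_value / depth_scale

def build_annotation(rgb_dir, depth_dir, depth_scale=256.0):
    """Pair rgb and depth files by filename and emit annotation
    entries shaped like the demo JSON (illustrative sketch)."""
    files = []
    for name in sorted(os.listdir(rgb_dir)):
        files.append({
            'rgb': os.path.join(rgb_dir, name),
            'depth': os.path.join(depth_dir, name),
            # 256.0 for KITTI Depth (Eigen splits), per the reply above
            'depth_scale': depth_scale,
        })
    return {'files': files}
```

The resulting dict can then be dumped with `json.dump` to produce the annotation file; the key point is that `depth_scale` must match how the depth PNGs were encoded, which for official KITTI Depth is 256.0.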
Hello, what does the depth scale mean? I want to use Metric3D for video metric depth estimation on my own outdoor dataset. Can I generate a JSON file in the same format as KITTI, and how should the depth scale be set? Also, which pretrained model works better for outdoor autonomous-driving datasets, and can I use the same settings as in the KITTI test script? convlarge_hourglass_0.3_150_step750k_v1.1.pth