Weird sky and faraway-object predictions when fine-tuning on the nuScenes dataset
Thank you for open-sourcing this excellent work!
I'm trying to fine-tune the model on the nuScenes dataset, but it produces some weird results. The white regions are faraway buildings (with no LiDAR returns) or sky, and the model predicts wrong depth values there. Do I need to add extra information, such as a sky mask, to solve this?
Sky-region depth estimation is really hard for our model, because our training data contain little supervision for such regions. I recommend using a semantic segmentation model to mask out the sky and enforcing an explicit sky loss on those pixels when you fine-tune the model.
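For what it's worth, here is a minimal sketch of what such a sky loss could look like in PyTorch. The function name, the choice of predicting disparity, and the zero-disparity target (sky is effectively at infinite depth, so disparity should go to zero) are my assumptions for illustration; the sky mask would come from any off-the-shelf semantic segmentation model.

```python
import torch

def sky_loss(pred_disp: torch.Tensor, sky_mask: torch.Tensor) -> torch.Tensor:
    """Push predicted disparity toward zero (depth toward infinity) on sky pixels.

    pred_disp: predicted disparity map, shape (B, 1, H, W)
    sky_mask:  1.0 where the segmentation model labels sky, 0.0 elsewhere
    """
    # clamp avoids division by zero when a batch contains no sky pixels
    n_sky = sky_mask.sum().clamp(min=1.0)
    return (pred_disp.abs() * sky_mask).sum() / n_sky

# toy example: 4x4 disparity map with a constant prediction of 0.5,
# where the top two rows are labelled as sky
pred_disp = torch.full((1, 1, 4, 4), 0.5)
sky_mask = torch.zeros((1, 1, 4, 4))
sky_mask[:, :, :2, :] = 1.0

loss = sky_loss(pred_disp, sky_mask)
print(loss.item())  # mean |disparity| over sky pixels -> 0.5
```

During fine-tuning you would add this as a weighted term on top of the regular supervised depth loss, so that the LiDAR-supervised pixels and the sky pixels each get an appropriate signal.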
Why doesn't fine-tuning on KITTI cause the same problem? In my understanding, KITTI should share the same issue.
Could you provide some hints about it?