panoramic-depth-estimation
Bad depth prediction results
Hi,
Thank you for this amazing work. I wanted to try your inference code on the proposed Carla validation dataset using one of your trained models. However, when I do so, I get poor results that are nowhere near the scores reported in your paper. For instance, when testing with the model in the mixed_warp folder, I get the following:
Abs. rel.  Sq. rel.  RMSE   RMSE log.  Depth acc < 1.25
0.810      67.533    8.520  1.427      0.051
The same thing happens with the carla_warp and carla models. Is there something I am missing? Note that I used the evaluation code that comes with your dataset. I would be grateful for any insight into what might be causing the problem.
Best regards
Hi, that’s a bit strange; those values are completely off. I’ll try to replicate your results. It sounds like a scaling issue or a disparity/depth mismatch. When calling the evaluation script, did you specify the --pred-as-disparity command-line option?
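For reference, the flag matters because disparity and depth are reciprocals of one another up to the camera constants, so evaluating one as if it were the other corrupts every metric. Below is a minimal sketch of that relation, assuming the usual pinhole stereo model; the parameter names baseline and focal are illustrative, not taken from this repository's code:

```python
import numpy as np

# Minimal sketch of the depth/disparity relation (pinhole stereo model).
# NOTE: illustrative only; parameter names are assumptions, not this
# repository's actual API.

def disparity_to_depth(disparity, baseline, focal, eps=1e-6):
    # depth = baseline * focal / disparity; eps guards against
    # division by zero at invalid (zero-disparity) pixels.
    return (baseline * focal) / np.maximum(disparity, eps)

def depth_to_disparity(depth, baseline, focal, eps=1e-6):
    # The inverse mapping: disparity = baseline * focal / depth.
    return (baseline * focal) / np.maximum(depth, eps)
```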
Thank you for your answer. Here are the results for the model in mixed_warp/, with and without the --pred-as-disparity flag:

With --pred-as-disparity:

Abs. rel.  Sq. rel.  RMSE    RMSE log.  Depth acc < 1.25
3.951      567.760   11.233  1.113      0.007

Without --pred-as-disparity:

Abs. rel.  Sq. rel.  RMSE   RMSE log.  Depth acc < 1.25
0.884      69.172    8.490  1.683      0.016
I also checked the evaluation code and it looks correct to me: the scores are indeed computed between the ground-truth depth maps (converted to disparities) and their corresponding predicted disparity maps.
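For what it's worth, this is the kind of computation I am referring to. It is a sketch of the standard depth-evaluation metrics, not your exact script, assuming gt and pred are flattened arrays of the same quantity (both disparities here) with invalid pixels already masked out; if the two were mismatched quantities, the reciprocal relationship alone would inflate every one of these numbers:

```python
import numpy as np

def eval_metrics(gt, pred):
    # Standard Eigen-style metrics; gt and pred must be the same
    # quantity (both disparities or both depths) and strictly positive.
    thresh = np.maximum(gt / pred, pred / gt)
    acc = (thresh < 1.25).mean()                                   # Depth acc < 1.25
    abs_rel = np.mean(np.abs(gt - pred) / gt)                      # Abs. rel.
    sq_rel = np.mean((gt - pred) ** 2 / gt)                        # Sq. rel.
    rmse = np.sqrt(np.mean((gt - pred) ** 2))                      # RMSE
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))  # RMSE log.
    return abs_rel, sq_rel, rmse, rmse_log, acc
```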