I got different results on the evaluation set.
Hello, thanks for your wonderful work. I downloaded the pretrained model and ran train.py to reproduce dynamic-multiframe-depth, but I got results different from yours (ResNet-18 pretrained).
|       | Abs_rel | Sq_rel | rmse  | rmse_log | a1    | a2    | a3    |
|-------|---------|--------|-------|----------|-------|-------|-------|
| Paper | 0.043   | 0.151  | 2.113 | 0.073    | 0.975 | 0.996 | 0.999 |
| Own   | 0.126   | 0.893  | 4.552 | 0.190    | 0.833 | 0.940 | 0.981 |
My environment is torch==1.10.1+cu113 and torchvision==0.11.2+cu113.
All metrics are quite different from those in the paper. I made no changes in "trian_my_resnet18.json" other than replacing "n_gpus=8" with "n_gpus=3".
- I want to know why my results are not good.
- Apart from the settings in "trian_my_resnet18.json", what details do I need to pay attention to in order to reproduce the results?
Looking forward to your reply, thank you. Best wishes!
Hi, thanks for your attention to our work. The results look odd. Can you double-check whether the scores come from the dynamic-area evaluation or the full-image evaluation?
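For reference, here is a minimal sketch (not the repo's actual evaluation code, and the function name is hypothetical) of the standard monocular-depth metrics, with an optional pixel mask. Passing a dynamic-object mask versus evaluating the full image can change the scores substantially, which is why the two evaluation modes are easy to confuse:

```python
import numpy as np

def depth_metrics(gt, pred, mask=None):
    """Standard depth-evaluation metrics, optionally restricted to a mask.

    gt, pred: depth maps of the same shape; mask: optional boolean array
    selecting the pixels to evaluate (e.g. dynamic objects only).
    """
    if mask is None:
        mask = np.ones_like(gt, dtype=bool)
    valid = mask & (gt > 0)  # skip pixels without ground truth
    gt, pred = gt[valid], pred[valid]

    # Threshold ratio used by the a1/a2/a3 accuracy metrics.
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": np.mean(np.abs(gt - pred) / gt),
        "sq_rel": np.mean((gt - pred) ** 2 / gt),
        "rmse": np.sqrt(np.mean((gt - pred) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        "a1": np.mean(thresh < 1.25),
        "a2": np.mean(thresh < 1.25 ** 2),
        "a3": np.mean(thresh < 1.25 ** 3),
    }
```

For example, if the prediction is wrong only outside the mask, the masked scores stay perfect while the full-image scores degrade.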