Dynamic Frames For Inference
Hi Zachary,
I am trying to evaluate the depth performance of DeepV2D with varying numbers of frames. If I change the KITTI config to Frames: 2, the network parameters of the motion predictor turn out to be mismatched. Do we have to re-train the network under the two-frame setting here?
Best
After checking the components, I think it may be OK to replace the third, fourth, and fifth frames with the image of the second frame, because DeepV2D takes an argmax strategy during depth estimation. I guess the result should then be comparable to the two-frame setting. The results on KITTI (~650 images) seem reasonable too:
| sc-inv | a10 | a1 | a2 | a3 | rmse | log_rmse | rel | sq_rel1 | sq_rel2 | log10 |
|--------|-----|----|----|----|------|----------|-----|---------|---------|-------|
| 0.1048 | 0.8459 | 0.9544 | 0.9871 | 0.9951 | 2.8348 | 0.1062 | 0.0608 | 0.3068 | 0.0199 | 0.0255 |
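For reference, here is a minimal sketch of the padding trick described above, assuming the input is a list of HxWx3 NumPy arrays; `pad_frames` is a hypothetical helper for illustration, not part of the DeepV2D API:

```python
import numpy as np

def pad_frames(frames, target=5):
    """Pad a short frame sequence to `target` frames by repeating
    the last available frame, so a 2-frame input matches the
    5-frame shape the pretrained motion predictor expects."""
    frames = list(frames)
    while len(frames) < target:
        frames.append(frames[-1])
    return np.stack(frames)

# Two dummy frames (keyframe + one neighbor) at a KITTI-like resolution.
f1 = np.zeros((192, 1088, 3), dtype=np.float32)
f2 = np.ones((192, 1088, 3), dtype=np.float32)

padded = pad_frames([f1, f2])
print(padded.shape)  # (5, 192, 1088, 3)
```

Because the duplicated frames contribute identical cost-volume slices, the argmax over depth hypotheses should be dominated by the two distinct views, which is why the scores stay close to a true two-frame run.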