
MiDaS for ROS1 issue

jing98tian opened this issue 3 years ago

1. For MiDaS for ROS1: if I want to get real depth, should I use `d = k * output + b` (with `output` in the range [0, 255]) to obtain the scaled inverse depth, and then `1/d` to get the real depth? Or should I remove the normalization step, i.e. the line `output = output.sub(min_val).div(range_val).mul(255.0F).clamp(0, 255).to(torch::kF32);`, so that the model returns the non-scaled inverse depth?

2. In the same scene, the recovered scale coefficient differs from frame to frame. Do you have any good solutions for this?

Looking forward to your reply.

jing98tian avatar Jun 30 '21 11:06 jing98tian

For 1: The safe way would be to remove the normalization entirely, and then perform the process you described.
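As a rough sketch of that process in numpy (the function name is made up for illustration, and the `scale`/`shift` values are placeholders that would have to be recovered from real depth measurements):

```python
import numpy as np

def inverse_to_depth(inv_depth, scale, shift, eps=1e-6):
    """Map a non-normalized MiDaS inverse-depth map to metric depth.

    inv_depth : raw network output (normalization removed)
    scale, shift : alignment coefficients estimated from real
                   depth measurements (placeholders here)
    """
    d = scale * inv_depth + shift   # aligned inverse depth
    d = np.clip(d, eps, None)       # guard against division by zero
    return 1.0 / d                  # metric depth
```

The `eps` clamp matters in practice: far-away regions produce inverse-depth values near zero, where `1/d` would otherwise blow up.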

For 2: With MiDaS the only existing solution is to align scale and shift based on some real depth measurements, similar to what we do in our evaluation (see "compute_scale_and_shift" in this gist: https://gist.github.com/ranftlr/a1c7a24ebb24ce0e2f2ace5bce917022). The challenge, of course, is where to get these real depth measurements, as they need to be available for every frame.
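For reference, that alignment is a closed-form least-squares fit of a scale `s` and shift `t` minimizing `||s * prediction + t - target||^2` over valid pixels. A numpy re-sketch (an illustration of the idea, not the exact code from the gist) looks roughly like this:

```python
import numpy as np

def compute_scale_and_shift(prediction, target, mask):
    """Solve the 2x2 normal equations for scale s and shift t
    minimizing sum(mask * (s * prediction + t - target)**2)."""
    a00 = np.sum(mask * prediction * prediction)
    a01 = np.sum(mask * prediction)
    a11 = np.sum(mask)
    b0 = np.sum(mask * prediction * target)
    b1 = np.sum(mask * target)

    det = a00 * a11 - a01 * a01
    if det <= 0:  # degenerate mask, no valid fit
        return 0.0, 0.0
    s = (a11 * b0 - a01 * b1) / det
    t = (-a01 * b0 + a00 * b1) / det
    return s, t
```

Here `target` would be the sparse real depth measurements (converted to inverse depth) and `mask` marks the pixels where they exist.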

ranftlr avatar Jul 02 '21 08:07 ranftlr

Thank you very much for your reply. At present I mainly want to deploy your algorithm on a UAV for obstacle-avoidance experiments, but, as you said, it is difficult to obtain a consistent scale in an unknown environment. In addition, I found in my experiments that the depth-estimation error is relatively large for brightly colored objects (such as orange ones); I don't know whether that is related to the training dataset.

jing98tian avatar Jul 02 '21 13:07 jing98tian

Currently we don't have an analysis along these lines. I'd be interested to see these patterns, as they might help us improve the models. Could you post some example input images where you observe these failures?

ranftlr avatar Jul 05 '21 09:07 ranftlr