monodepth2
Why is the disparity of my image very high for a small, far-away object?
I built a model that predicts whether an object is close, moderate, or far away. I'm getting good results, but in a few cases it doesn't perform well. Can you please explain why the disparity of a small (far-away) object is high?
My object detection model detects a person and a car (they are very small in the image).
An object that is very far away normally has a very low mean disparity, but for this image I'm getting high values.
import numpy as np
import torch

# resize the network's disparity output back to the original image resolution
disp_resized = torch.nn.functional.interpolate(
    disp, (original_height, original_width), mode="bilinear", align_corners=False)

# convert to numpy before computing the global statistics
disp_resized_np = disp_resized.squeeze().cpu().numpy()
vmax = np.percentile(disp_resized_np, 95)
mean_disp_glo = disp_resized_np.mean()
std_disp_glo = disp_resized_np.std()
Help is really appreciated, @daniyar-niantic. Thanks!
Hi, how did you get the absolute depth based on the relative depth? I'm currently working on converting a relative depth map to an absolute depth map. thanks!
Did you train this model on your training data? The KITTI model is far from perfect, so some error is expected. You would also probably get better predictions from a stereo-trained model than from a video-only model.
I haven't trained any model; I just tried the pretrained mono_640x192 model that is available. My goal is to get the depth of each object present in the image, so first I used YOLO for object detection and passed the bounding boxes to monodepth to get the disparity of each object. Based on that I built a rule-based system: if an object's disparity is much lower than the global mean and standard deviation of disparity, the object is classified as far away, and so on.
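(A minimal sketch of such a rule, assuming obj_mean_disp is the mean disparity inside a detected bounding box and mean_disp_glo / std_disp_glo are the global statistics from the snippet above; the function name and the one-sigma thresholds are arbitrary examples, not something from monodepth2.)

def classify_distance(obj_mean_disp, mean_disp_glo, std_disp_glo):
    # higher disparity = closer object; the thresholds here are just an illustration
    if obj_mean_disp > mean_disp_glo + std_disp_glo:
        return "close"
    if obj_mean_disp < mean_disp_glo - std_disp_glo:
        return "far"
    return "moderate"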
Do you mean that you pass crops of the input image to the monodepth model? If so, it is not likely to work well. The monodepth network tries to understand the depth of objects from visual cues like the pixel size of objects and vanishing points. If you crop individual objects out of the image, those cues are no longer available to the network. You should pass the whole image to the network and then crop from the estimated depth map.
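(For illustration, a minimal sketch of that pipeline: run the network on the full frame and only afterwards crop the predicted disparity with a detection box. The encoder / depth_decoder / input_image names follow test_simple.py; the bbox variable and its (x1, y1, x2, y2) pixel format are assumptions about the detector's output, not part of monodepth2.)

# run monodepth2 on the FULL image, then resize the disparity to the input resolution
with torch.no_grad():
    features = encoder(input_image)            # input_image: the whole preprocessed frame
    disp = depth_decoder(features)[("disp", 0)]
disp_resized = torch.nn.functional.interpolate(
    disp, (original_height, original_width), mode="bilinear", align_corners=False)
disp_np = disp_resized.squeeze().cpu().numpy()

# only now crop the disparity map with the detector's bounding box
x1, y1, x2, y2 = bbox                          # bbox from YOLO, in pixel coordinates
obj_mean_disp = disp_np[y1:y2, x1:x2].mean()   # mean disparity inside the box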
Yes, I'm doing the same and still getting issues. I'm passing the whole image to monodepth and getting the disparity of the whole image, but my goal is to calculate the depth of each object, so after that I cropped my image using object detection. Do you have any sample code so I can understand better? Thanks @daniyar-niantic
so after that I cropped my image using object detection.
Are you cropping the disparity map or the color image?
Yes. Can you review my code? I'm sharing a Google Colab here; let me know what mistakes I have made. @daniyar-niantic
Have you figured out this problem? I am doing the same work. Thank you!
@eugeneYz not yet
Hello,
I found that Google Colab doesn't have Python 3.6 anymore, and I get an error when training on Colab.
Can you share details about your environment (versions of Python, PyTorch, torchvision, OpenCV, CUDA, whether cuDNN is used, ...) and the commands you used to install the packages needed to train monodepth2 on Google Colab?
Thank you.
Hi @akashAD98 ,
You should check whether you are cropping a depth map or a disparity map. Disparity is 1/depth, so a far-away object should have a low disparity value but a high depth value; if you are seeing high values for far-away objects, you are probably looking at depth rather than disparity.
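(For reference, monodepth2 converts the network's sigmoid disparity output into depth with the disp_to_depth helper in layers.py; the sketch below reproduces that conversion as I understand it, with 0.1 and 100.0 being the usual KITTI min/max depth settings, so double-check against the repository.)

def disp_to_depth(disp, min_depth, max_depth):
    # scale the network's sigmoid output into [1/max_depth, 1/min_depth], then invert
    min_disp = 1 / max_depth
    max_disp = 1 / min_depth
    scaled_disp = min_disp + (max_disp - min_disp) * disp
    depth = 1 / scaled_disp
    return scaled_disp, depth

# e.g. with the KITTI defaults: a far-away object gets low disparity and high depth
scaled_disp, depth = disp_to_depth(disp, 0.1, 100.0)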