
Potential bug in get_disparity_smoothness

Open yzcjtr opened this issue 8 years ago • 12 comments

Hi,

I think I've found a bug in the function get_disparity_smoothness in monodepth_model.py. The last line concatenates the two lists instead of adding them elementwise, so the function actually returns a list of 8 tensors of shape [B, H, W, 1] rather than 4 per-scale tensors. Code that consumes this return value may therefore be using it incorrectly.
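A minimal sketch of the suspected bug, with plain Python lists standing in for the per-scale lists of tensors (the names here are illustrative, not taken from the repo):

```python
smoothness_x = [1, 2, 3, 4]   # one x-gradient term per pyramid scale
smoothness_y = [5, 6, 7, 8]   # one y-gradient term per pyramid scale

# What the last line does: "+" on Python lists concatenates them,
# producing a single list of 8 entries.
concatenated = smoothness_x + smoothness_y
print(len(concatenated))  # 8

# What was presumably intended: elementwise addition per scale,
# producing 4 combined smoothness terms.
elementwise = [sx + sy for sx, sy in zip(smoothness_x, smoothness_y)]
print(elementwise)  # [6, 8, 10, 12]
```

With actual tensors the elementwise version would also need matching shapes, which is the complication discussed below in the thread.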

yzcjtr avatar Sep 07 '17 02:09 yzcjtr

Looks like you're right! It ends up only adding the horizontal smoothness to the loss. Thanks for spotting this. I'll think of a way to fix this.

mrharicot avatar Sep 07 '17 03:09 mrharicot

Hey! I noticed this bug too. But when I try to add them up in a single list comprehension, I get an error saying the gradient sizes differ. This is somewhat expected, since gradient_x has size (H, W-1) while gradient_y has size (H-1, W), as a consequence of how your function computes the X and Y image gradients.

Even after fixing that (with a crude hack that basically interpolates the excluded border row and column), adding the weighted X and Y gradients in the list comprehension raised another error saying the batch sizes differ (8 and 7 respectively) for gradients_x[i] and gradients_y[i].

For now, I am just adding them wherever the batch sizes match, and where they don't I use only the weighted value of the X gradients. This is just to get the code running (assuming the mismatch does not happen on every iteration).
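The shape mismatch described above can be reproduced with a hypothetical numpy sketch (numpy stands in for TensorFlow here; the forward-difference gradients drop one column along width and one row along height):

```python
import numpy as np

img = np.zeros((2, 8, 8, 1))                 # [B, H, W, C]

gx = img[:, :, :-1, :] - img[:, :, 1:, :]    # shape (2, 8, 7, 1): one column shorter
gy = img[:, :-1, :, :] - img[:, 1:, :, :]    # shape (2, 7, 8, 1): one row shorter

print(gx.shape, gy.shape)
# gx + gy fails: (8, 7) and (7, 8) do not broadcast against each other.
```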

kninad avatar Dec 12 '17 08:12 kninad

Does this issue still exist? Has any fix been suggested? Thanks in advance.

jahaniam avatar May 12 '18 05:05 jahaniam

Yeah, this issue is not yet solved. @mrharicot, could you give some insight on this?

shubhamwagh avatar May 20 '18 16:05 shubhamwagh

@a-jahani @shubhamwagh The simplest way to fix it is to change range(4) to range(8) in https://github.com/mrharicot/monodepth/blob/master/monodepth_model.py#L355 and https://github.com/mrharicot/monodepth/blob/master/monodepth_model.py#L356. In my experience, it shouldn't really impact the results.
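An illustrative sketch of why this change matters, with plain numbers standing in for the per-scale smoothness tensors (loss weighting omitted): since the returned list holds 8 entries, 4 x-gradient terms followed by 4 y-gradient terms, summing over range(4) silently drops all the y terms.

```python
disp_smoothness = [1, 2, 3, 4,    # x-gradient terms, scales 0..3
                   5, 6, 7, 8]    # y-gradient terms, scales 0..3

only_x = sum(disp_smoothness[i] for i in range(4))  # 10: y terms ignored
both   = sum(disp_smoothness[i] for i in range(8))  # 36: all 8 terms included
```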

I will not update this code anymore, as I am mostly keeping it around so that people can compare with the CVPR submission.

A new PyTorch codebase with an improved method will be online in the next month or so, in a separate repo.

mrharicot avatar May 20 '18 17:05 mrharicot

> When I try to add them up in a single list comprehension, I get an error saying the gradient sizes differ. This is somewhat expected, since gradient_x has size (H, W-1) while gradient_y has size (H-1, W), as a consequence of how your function computes the X and Y image gradients.

Is there any solution for this?

abhiagwl4262 avatar May 21 '18 09:05 abhiagwl4262

@mrharicot Why the switch to pytorch? I'm curious about switching as well.

ghost avatar May 21 '18 14:05 ghost

Because it is essentially Python: it is very easy to use, and you can iterate and try things extremely quickly. The data loaders are also very nice, as they are pure Python, although I haven't looked at the latest TensorFlow Dataset API.

mrharicot avatar May 21 '18 15:05 mrharicot

@abhiagwl4262 Because we are using tf.abs, you can simply compute the mean separately for the two gradients, or you can crop each of them so that the sizes match at (H-1, W-1).
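A hedged numpy sketch of the cropping approach suggested here (numpy stands in for TensorFlow; shapes and names are illustrative). Trimming the extra border row from the x-gradient and the extra column from the y-gradient leaves both at (H-1, W-1), so they can be added directly:

```python
import numpy as np

B, H, W = 2, 8, 8
img = np.random.rand(B, H, W, 1)

gx = np.abs(img[:, :, :-1, :] - img[:, :, 1:, :])  # (B, H, W-1, 1)
gy = np.abs(img[:, :-1, :, :] - img[:, 1:, :, :])  # (B, H-1, W, 1)

# Crop each so both are (B, H-1, W-1, 1), then add elementwise.
combined = gx[:, :-1, :, :] + gy[:, :, :-1, :]
print(combined.shape)  # (2, 7, 7, 1)

# Alternative mentioned above: skip cropping and fold each into the
# loss separately, e.g. gx.mean() + gy.mean().
```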

mrharicot avatar May 21 '18 15:05 mrharicot

@mrharicot I changed the following functions.

def gradient_x(self, img):
    paddings = tf.constant([[0, 0], [0, 0], [1, 0], [0, 0]])
    img = tf.pad(img, paddings, 'CONSTANT')
    gx = img[:, :, :-1, :] - img[:, :, 1:, :]
    return gx

def gradient_y(self, img):
    paddings = tf.constant([[0, 0], [1, 0], [0, 0], [0, 0]])
    img = tf.pad(img, paddings, 'CONSTANT')
    gy = img[:, :-1, :, :] - img[:, 1:, :, :]
    return gy
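For reference, a numpy re-creation of this padding trick (np.pad stands in for tf.pad with 'CONSTANT' mode): zero-padding one extra column before the x-difference and one extra row before the y-difference keeps both gradients at the full (H, W) size, so they can be added without cropping.

```python
import numpy as np

def gradient_x(img):
    # Pad one column on the left, then forward-difference along width.
    padded = np.pad(img, ((0, 0), (0, 0), (1, 0), (0, 0)))
    return padded[:, :, :-1, :] - padded[:, :, 1:, :]

def gradient_y(img):
    # Pad one row on the top, then forward-difference along height.
    padded = np.pad(img, ((0, 0), (1, 0), (0, 0), (0, 0)))
    return padded[:, :-1, :, :] - padded[:, 1:, :, :]

img = np.random.rand(2, 8, 8, 1)
print(gradient_x(img).shape, gradient_y(img).shape)  # both (2, 8, 8, 1)
```

Note that zero-padding introduces an artificial gradient at the image border, which may or may not matter for the smoothness loss.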

abhiagwl4262 avatar May 23 '18 06:05 abhiagwl4262

@mrharicot

Will there be any issue if I use tf.image.ssim instead of self.SSIM?

abhiagwl4262 avatar May 23 '18 06:05 abhiagwl4262

> @a-jahani @shubhamwagh The simplest way to fix it is to change range(4) to range(8) in https://github.com/mrharicot/monodepth/blob/master/monodepth_model.py#L355 and https://github.com/mrharicot/monodepth/blob/master/monodepth_model.py#L356. In my experience, it shouldn't really impact the results.
>
> I will not update this code anymore, as I am mostly keeping it around so that people can compare with the CVPR submission.
>
> A new PyTorch codebase with an improved method will be online in the next month or so, in a separate repo.

Hi @mrharicot, will the new PyTorch code you mentioned be an implementation of the same monodepth model, or of the latest model from "Digging into Self-Supervised Monocular Depth Estimation"?

roboticsbala avatar Oct 15 '18 11:10 roboticsbala