monodepth
Potential bug in get_disparity_smoothness
Hi,
I think I've found a bug in the function get_disparity_smoothness in monodepth_model.py. The last line concatenates two lists instead of adding them elementwise, so the function actually returns a list of 8 tensors of shape [B, H, W, 1]. As a result, the way its output is used downstream may be incorrect.
Looks like you're right! It ends up only adding the horizontal smoothness to the loss. Thanks for spotting this. I'll think of a way to fix this.
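Roughly what happens (an illustrative sketch with placeholder strings in place of the actual tensors, not the repo code):

smoothness_x = ["sx_scale0", "sx_scale1", "sx_scale2", "sx_scale3"]
smoothness_y = ["sy_scale0", "sy_scale1", "sy_scale2", "sy_scale3"]

# The last line of get_disparity_smoothness does this: Python list
# concatenation, so the function returns 8 entries, not 4 elementwise sums.
smoothness = smoothness_x + smoothness_y
print(len(smoothness))  # 8

# The loss later indexes the result with range(4), so only the
# horizontal terms ever contribute.
used = [smoothness[i] for i in range(4)]
print(used)  # ['sx_scale0', 'sx_scale1', 'sx_scale2', 'sx_scale3']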
Hey! I noticed this bug as well. But when I try to add them up in a single list comprehension, I get an error saying the sizes of the gradients are different. This is somewhat expected, since gradient_x has size (H, W-1) while gradient_y has size (H-1, W), as a consequence of your functions for computing the X and Y image gradients.
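For reference, your gradient helpers are (roughly) these forward differences, which is where the off-by-one in the shapes comes from:

import tensorflow as tf

def gradient_x(img):
    # [B, H, W, C] -> [B, H, W-1, C]
    return img[:, :, :-1, :] - img[:, :, 1:, :]

def gradient_y(img):
    # [B, H, W, C] -> [B, H-1, W, C]
    return img[:, :-1, :, :] - img[:, 1:, :, :]

disp = tf.zeros([8, 256, 512, 1])
print(gradient_x(disp).shape)  # (8, 256, 511, 1)
print(gradient_y(disp).shape)  # (8, 255, 512, 1)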
Even after fixing that (with a crude hack, basically interpolating the excluded border row and column), I got another error while adding the weighted X and Y gradients in the list comprehension, saying the batch sizes are different (8 and 7 respectively) for gradients_x[i] and gradients_y[i].
For now, I just add them whenever the batch sizes match, and where they don't I only use the weighted value of the X gradients. This is just to get the code running (assuming the mismatch does not happen on every iteration).
Does this issue still exist? Has any fix been suggested? Thanks in advance.
Yeah, this issue is not solved yet. @mrharicot, could you give some insight on this?
@a-jahani @shubhamwagh the simplest way to fix it is to change range(4) to range(8) in https://github.com/mrharicot/monodepth/blob/master/monodepth_model.py#L355 and https://github.com/mrharicot/monodepth/blob/master/monodepth_model.py#L356
from my experience, it shouldn't really impact the results
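i.e. something along these lines (a standalone sketch: the tensor shapes are placeholders and the / 2 ** i scale weighting is paraphrased from the repo, not copied verbatim):

import tensorflow as tf

# Placeholder for the 8-entry list returned by get_disparity_smoothness:
# 4 horizontal terms followed by 4 vertical terms, one pair per scale.
disp_left_smoothness = [tf.zeros([8, 256 // 2 ** (i % 4), 512 // 2 ** (i % 4), 1])
                        for i in range(8)]

# Before: only the first 4 (horizontal) entries reach the loss.
disp_left_loss = [tf.reduce_mean(tf.abs(disp_left_smoothness[i])) / 2 ** i
                  for i in range(4)]

# After: iterate over all 8 entries so the vertical terms are included too.
disp_left_loss = [tf.reduce_mean(tf.abs(disp_left_smoothness[i])) / 2 ** i
                  for i in range(8)]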
I will not update this code anymore, as I am mostly keeping it around so that people can compare with the CVPR submission.
a new pytorch code with an improved method will be online in the next month or so in a separate repo
Regarding the size mismatch between gradient_x (H, W-1) and gradient_y (H-1, W) mentioned above: is there any solution for this?
@mrharicot Why the switch to pytorch? I'm curious about switching as well.
Because it is essentially Python, it is very easy to use and you can iterate and try things extremely quickly. The dataloaders are also very nice as they are pure Python, although I haven't looked at the latest TensorFlow dataset API.
@abhiagwl4262 because we are using tf.abs, you can simply compute the mean separately for both, or you can crop each of them so that the sizes match (H-1, W-1).
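For example (a quick sketch with placeholder shapes, not the repo code):

import tensorflow as tf

# Weighted gradients with the mismatched shapes discussed above:
# the x term is [B, H, W-1, 1], the y term is [B, H-1, W, 1].
smoothness_x = tf.zeros([8, 256, 511, 1])
smoothness_y = tf.zeros([8, 255, 512, 1])

# Option 1: take the two means separately and add the scalars.
loss = tf.reduce_mean(tf.abs(smoothness_x)) + tf.reduce_mean(tf.abs(smoothness_y))

# Option 2: crop both to the common [B, H-1, W-1, 1] region and add elementwise.
combined = tf.abs(smoothness_x[:, :-1, :, :]) + tf.abs(smoothness_y[:, :, :-1, :])
loss = tf.reduce_mean(combined)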
@mrharicot I changed the following functions.
def gradient_x(self, img):
    # Zero-pad one column on the left so the forward difference keeps width W.
    paddings = tf.constant([[0, 0], [0, 0], [1, 0], [0, 0]])
    img = tf.pad(img, paddings, 'CONSTANT')
    gx = img[:, :, :-1, :] - img[:, :, 1:, :]
    return gx

def gradient_y(self, img):
    # Zero-pad one row on the top so the forward difference keeps height H.
    paddings = tf.constant([[0, 0], [1, 0], [0, 0], [0, 0]])
    img = tf.pad(img, paddings, 'CONSTANT')
    gy = img[:, :-1, :, :] - img[:, 1:, :, :]
    return gy
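With the padding, both gradients keep the input shape [B, H, W, C], so gradients_x[i] and gradients_y[i] can be added elementwise without any cropping.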
@mrharicot
Will there be any issue if I use tf.image.ssim instead of self.SSIM?
Hi @mrharicot, is the new pytorch code that you mentioned going to be an implementation of the same monodepth model or the latest model from "Digging into Self-Supervised Monocular Depth Estimation"?