kishore-greddy

Results: 13 comments by kishore-greddy

Hey @mattpoggi, thanks for the quick reply. I will try this out.

Hi @mattpoggi, I forgot to ask: have you also tried the other method? Meaning, keeping the uncertainty values greater than 0 in the decoder and actually modelling the uncertainty...
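
For context, a minimal sketch of what such a decoder head might look like (the module and names here are hypothetical, not taken from the repository): the decoder emits an unconstrained feature map and a softplus keeps the predicted uncertainty strictly positive, as opposed to predicting an unbounded log-uncertainty.

```python
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyHead(nn.Module):
    """Hypothetical decoder head that outputs a strictly positive uncertainty map."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, features):
        # softplus keeps sigma > 0 without the hard saturation of a sigmoid
        return F.softplus(self.conv(features)) + 1e-6
```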

Hey @mattpoggi, I tried to model the log-uncertainty as you suggested, without binding the uncertainty to any range. I have an exploding-gradients problem. I have updated my loss function...
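
For reference, a minimal sketch of the kind of log-uncertainty (heteroscedastic) loss being discussed, in the spirit of Kendall and Gal; the tensor names are placeholders, not the actual training code. The exp(-log_var) term is a common source of exploding gradients when log_var becomes very negative, so clamping it (and clipping gradients) is a frequent workaround.

```python
import torch

def log_uncertainty_loss(pred, target, log_var):
    """Heteroscedastic regression loss: |pred - target| * exp(-log_var) + log_var."""
    # clamping log_var limits exp(-log_var), which otherwise can blow up gradients
    log_var = torch.clamp(log_var, min=-6.0, max=6.0)
    residual = torch.abs(pred - target)
    return (residual * torch.exp(-log_var) + log_var).mean()

# gradient clipping is another common mitigation for exploding gradients:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```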

Hi @mattpoggi, I observed that this occurs at almost every training run of the log model. I have tried it 3 times now, and every time I have this problem. Sometimes...

Do you mean scaling the uncertainty to full resolution before calculating the loss? Yes, I have done that. ![image](https://user-images.githubusercontent.com/76811772/105062794-4af54f80-5a7b-11eb-8537-bd4c5f8e9ca2.png) If you mean upsampling the uncertainties in the decoder, ...
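
For clarity, the upsampling step referred to above could look like the following sketch (interpolation mode and names are assumptions): the low-resolution uncertainty map is resized to the input resolution before the loss is evaluated.

```python
import torch.nn.functional as F

def upsample_to_full_res(uncertainty, full_height, full_width):
    """Resize a low-resolution uncertainty map to the input resolution before the loss."""
    return F.interpolate(
        uncertainty, size=(full_height, full_width),
        mode="bilinear", align_corners=False,
    )
```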

Thanks :) I'll be waiting for your inputs.

Okay... Let me know how it goes.

@jlia904 Even after you corrected the code snippet with torch.cuda.synchronize(), your inference time settles around 120 ms, which is 10 times slower, albeit at a higher resolution. Did you try...
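
For context, a typical way to time GPU inference correctly is sketched below (the model and input are placeholders): without torch.cuda.synchronize(), the measured time only reflects kernel launch rather than execution, which is presumably what the corrected snippet fixes.

```python
import time
import torch

@torch.no_grad()
def measure_inference_ms(model, dummy_input, warmup=10, iters=100):
    """Average GPU inference time in milliseconds, synchronizing around the timed region."""
    for _ in range(warmup):          # warm-up iterations to exclude CUDA init costs
        model(dummy_input)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(dummy_input)
    torch.cuda.synchronize()         # wait for all kernels before stopping the clock
    return (time.time() - start) / iters * 1000.0
```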

@LiheYoung As reported by @jlia904, I also tried inference at 512x512 image resolution on a Tesla V100-DGXS-32GB, and my inference time was around 130 ms, which is nowhere close to...

@jlia904 Thanks for the reply. Do you know a possible reason for it? Or do you think the reported numbers are wrong?