REDNet-pytorch
upsampling issue
Hi @yjn870, thank you for the implementation.
It works reasonably well in most cases.
However, for images with odd dimensions such as [370, 545], [663, 962], or [359, 478],
the forward pass does not preserve the original spatial dimensions. As a result, I receive the following error:
...
in forward
x += residual
RuntimeError: The expanded size of the tensor (360)
must match the existing size (359) at non-singleton dimension 2
torch.Size([1, 3, 360, 478]) torch.Size([1, 3, 359, 478])
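The mismatch in the trace above is consistent with a stride-2 downsampling layer followed by a stride-2 transposed convolution: an odd spatial size is rounded when halved, so upsampling lands one pixel off. A minimal sketch of the size arithmetic, assuming the common kernel_size=3, stride=2, padding=1, output_padding=1 hyperparameters (an assumption about this implementation, not taken from it):

```python
import math

def conv_out(n, k=3, s=2, p=1):
    # Spatial size after Conv2d(kernel_size=k, stride=s, padding=p)
    return math.floor((n + 2 * p - k) / s) + 1

def deconv_out(n, k=3, s=2, p=1, op=1):
    # Spatial size after ConvTranspose2d(kernel_size=k, stride=s,
    # padding=p, output_padding=op)
    return (n - 1) * s - 2 * p + k + op

# An odd input size does not round-trip; an even one does.
print(359, "->", deconv_out(conv_out(359)))  # 359 -> 360
print(360, "->", deconv_out(conv_out(360)))  # 360 -> 360
```

This reproduces the reported shapes exactly: a height of 359 comes back as 360, while 478 (even) is preserved, so the residual addition `x += residual` fails at dimension 2.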
Can you suggest a solution to this problem?
@BedirYilmaz Have you solved this issue?
@srinivas-alva-gc the only solution that I can think of is patch-based training. You divide your training set into constant-sized patches whose sizes match the needs of the model (so that the condition above never arises) and perform minibatch training. This would work for training the model.
Testing is another issue, though. If you require exactly the same dimensions as the input at test time, then you have a deeper problem. I suspect you do: since this is an image quality task, you will end up comparing images at the pixel level, so you need matching dimensions. But maybe you can find a way to divide the test samples into patches as well.
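A lighter alternative to patching at test time is to pad the input up to a size the network can round-trip, then crop the output back. A minimal sketch of the size bookkeeping, assuming the network's total downsampling factor is 2 (one stride-2 layer; an assumption, not taken from the implementation), with a hypothetical helper name:

```python
def pad_to_multiple(h, w, m=2):
    """Return the padded (h, w) and the (bottom, right) padding applied.

    m is the network's total downsampling factor -- 2 here, which is
    an assumption about this particular architecture.
    """
    ph = (m - h % m) % m  # extra rows so h is divisible by m
    pw = (m - w % m) % m  # extra cols so w is divisible by m
    return (h + ph, w + pw), (ph, pw)

# The problematic sizes from the report become safe after padding:
print(pad_to_multiple(359, 478))  # ((360, 478), (1, 0))
print(pad_to_multiple(370, 545))  # ((370, 546), (0, 1))
```

In a PyTorch forward pass you would apply this with something like `F.pad(x, (0, pw, 0, ph), mode='reflect')` before the model and slice the output back to `out[..., :h, :w]` afterward, so pixel-level metrics still compare images of the original size.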
Nah. That's too much of a hassle. I am pretty sure there is something wrong with the implementation of the architecture.