EDSR-PyTorch
Bug in testing Set5 X3 with `args.chop=True`
When I set `args.chop=False`, everything goes well. When I set it to `True`, it outputs the following error:
```
Evaluation:################################################ 2019-10-10-16:38:03
 40%|██████████████████    | 2/5 [00:02<00:04, 1.36s/it]
Traceback (most recent call last):
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3325, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-8439669b053c>", line 1, in <module>
    runfile('/Code/EDSR-PyTorch-master/src/main.py', wdir='/Code/EDSR-PyTorch-master/src')
  File "/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Code/EDSR-PyTorch-master/src/main.py", line 41, in <module>
    main()
  File "/Code/EDSR-PyTorch-master/src/main.py", line 32, in main
    while not t.terminate():
  File "/Code/EDSR-PyTorch-master/src/trainer.py", line 141, in terminate
    self.test()
  File "/Code/EDSR-PyTorch-master/src/trainer.py", line 91, in test
    sr = self.model(lr, idx_scale)
  File "/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/Code/EDSR-PyTorch-master/src/model/__init__.py", line 62, in forward
    return forward_function(x)
  File "/Code/EDSR-PyTorch-master/src/model/__init__.py", line 175, in forward_chop
    _y[..., top, right] = y_chop[1][..., top, right_r]
RuntimeError: The expanded size of the tensor (127) must match the existing size (128) at non-singleton dimension 3. Target sizes: [1, 3, 127, 127]. Tensor sizes: [3, 127, 128]
```
Note that I have no problem with training. In testing, `args.chop=True` works fine for X2 and X4. For X3, some images in Set5 and Set14 fail. By the way, I downloaded the processed datasets directly from the link in the README.
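For what it's worth, the off-by-one appears to come from odd upscaled dimensions: at X3, an odd LR width gives an odd SR width, and the original assembly code derives the target region from `w // 2` of the scaled width while the source slice keeps the remaining `w - w // 2` columns. A minimal sketch of the arithmetic (the LR width of 85 is a hypothetical odd example, not taken from an actual Set5 image):

```python
w_lr, scale = 85, 3        # hypothetical odd LR width at X3
w = w_lr * scale           # 255: odd width * odd scale stays odd

w_half = w // 2            # 127

# Width of the target region _y[..., w - w_half : w]
target_width = w - (w - w_half)   # == w_half == 127

# Width of the source slice y_chop[1][..., w // 2 - w :]
# (a negative start of w // 2 - w keeps the last w - w // 2 columns)
source_width = w - w // 2         # == 128

# For even w these agree; for odd w they differ by one, which is
# exactly the 127-vs-128 mismatch reported in the traceback.
print(target_width, source_width)
```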
Hello, I have met the same problem. Have you solved it?
Hi @xiaoj45 @JingyunLiang, I have met the same problem. Have you solved it?
Edit `EDSR-PyTorch-master/src/model/__init__.py` as follows:
```python
def forward_chop(self, *args, shave=10, min_size=160000):
    scale = 1 if self.input_large else self.scale[self.idx_scale]
    n_GPUs = min(self.n_GPUs, 4)
    # height, width
    h, w = args[0].size()[-2:]

    h_half, w_half = h // 2, w // 2
    h_size, w_size = h_half + shave, w_half + shave

    top = slice(0, h_size)
    bottom = slice(h - h_size, h)
    left = slice(0, w_size)
    right = slice(w - w_size, w)
    x_chops = [torch.cat([
        a[..., top, left],
        a[..., top, right],
        a[..., bottom, left],
        a[..., bottom, right]
    ]) for a in args]

    y_chops = []
    if h * w < 4 * min_size:
        for i in range(0, 4, n_GPUs):
            x = [x_chop[i:(i + n_GPUs)] for x_chop in x_chops]
            y = P.data_parallel(self.model, *x, range(n_GPUs))
            if not isinstance(y, list): y = [y]
            if not y_chops:
                y_chops = [[c for c in _y.chunk(n_GPUs, dim=0)] for _y in y]
            else:
                for y_chop, _y in zip(y_chops, y):
                    y_chop.extend(_y.chunk(n_GPUs, dim=0))
    else:
        for p in zip(*x_chops):
            p1 = [p[0].unsqueeze(0)]
            y = self.forward_chop(*p1, shave=shave, min_size=min_size)
            if not isinstance(y, list): y = [y]
            if not y_chops:
                y_chops = [[_y] for _y in y]
            else:
                for y_chop, _y in zip(y_chops, y): y_chop.append(_y)

    h, w = scale * h, scale * w
    h_half, w_half = scale * h_half, scale * w_half
    h_size, w_size = scale * h_size, scale * w_size
    shave *= scale
    # h *= scale
    # w *= scale
    # top = slice(0, h_half)
    # bottom = slice(h - h_half, h)
    # bottom_r = slice(h//2 - h, None)
    # left = slice(0, w_half)
    # right = slice(w - w_half, w)
    # right_r = slice(w//2 - w, None)

    # batch size, number of color channels
    b, c = y_chops[0][0].size()[:-2]
    y = [y_chop[0].new(b, c, h, w) for y_chop in y_chops]
    for y_chop, _y in zip(y_chops, y):
        _y[..., 0:h_half, 0:w_half] = y_chop[0][..., 0:h_half, 0:w_half]
        _y[..., 0:h_half, w_half:w] = y_chop[1][..., 0:h_half, (w_size - w + w_half):w_size]
        _y[..., h_half:h, 0:w_half] = y_chop[2][..., (h_size - h + h_half):h_size, 0:w_half]
        _y[..., h_half:h, w_half:w] = y_chop[3][..., (h_size - h + h_half):h_size, (w_size - w + w_half):w_size]

    if len(y) == 1: y = y[0]

    return y
```
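As a sanity check on the patched indexing (reusing a hypothetical odd LR width of 85 at X3), the source slice `(w_size - w + w_half):w_size` always has the same width as the target region `w_half:w`, because both reduce to `w - w_half` by construction:

```python
w_lr, scale, shave = 85, 3, 10   # hypothetical odd LR width at X3

w_half = w_lr // 2               # 42
w_size = w_half + shave          # 52

# Scale everything the way the patched function does.
w, w_half, w_size = scale * w_lr, scale * w_half, scale * w_size

target_width = w - w_half                        # _y[..., w_half:w]
source_width = w_size - (w_size - w + w_half)    # y_chop[..., (w_size - w + w_half):w_size]

# Both widths equal w - w_half, so the assignment
# _y[..., w_half:w] = y_chop[1][..., (w_size - w + w_half):w_size]
# always fits, regardless of parity.
print(target_width, source_width)
```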
@HolmesShuan, thanks for your reply! I have solved this issue by replacing `forward_chop` in EDSR with the `forward_chop` from RCAN (PyTorch 0.4.0). This change does not affect my accuracy. Thank you very much!
@1187697147 Cheers~
Hi, I have the same problem and this advice doesn't work for me. Although a lot of time has passed, I would appreciate it if you could guide me and give me more explanation on how to solve this problem. Thanks @Senwang98
Hello. I want to give some PNG images as input to the EDSR model. At first I was faced with this error:

```
RuntimeError: Given groups=1, weight of size [3, 3, 1, 1], expected input[1, 4, 268, 300] to have 3 channels, but got 4 channels instead.
```

It was solved with the help of https://github.com/sanghyun-son/EDSR-PyTorch/issues/166#issuecomment-490664442. After that I faced another error:

```
RuntimeError: The size of tensor a (1070) must match the size of tensor b (698) at non-singleton dimension 3.
```

and applied your guide https://github.com/sanghyun-son/EDSR-PyTorch/issues/223#issuecomment-773091738. I used your advice and replaced the code posted on this page as `forward_chop` in the `EDSR-PyTorch-master/src/model/__init__.py` file, but I encountered a new error and I don't know how to fix it:

```
IndentationError: unindent does not match any outer indentation level
```

Although a lot of time has passed, I would be very grateful if you could tell me how I can solve this problem, or guide me if I did something wrong. @HolmesShuan
Full error:

```
Traceback (most recent call last):
  File "main.py", line 5, in <module>
    import model
  File "/content/EDSR-PyTorch/src/model/__init__.py", line 174
    def forward_x8(self, *args, forward_function=None):
                                                       ^
IndentationError: unindent does not match any outer indentation level
```
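This `IndentationError` is most likely a paste problem rather than a bug in the fix: the snippet above lost its original indentation when it was posted, so every line has to be re-indented consistently by hand. A minimal reproduction of the same message, with a made-up one-line function:

```python
# Dedenting to a level (4 spaces) that never appeared as an outer
# level (0 or 8 spaces) reproduces the exact message from the traceback.
bad_src = (
    "def f():\n"
    "        x = 1\n"   # body indented 8 spaces
    "    y = 2\n"       # dedent to 4 spaces: matches no outer level
)

try:
    compile(bad_src, "<pasted>", "exec")
except IndentationError as e:
    print(e.msg)  # unindent does not match any outer indentation level
```

Re-indenting the pasted `forward_chop` so that every level uses the same number of spaces as the rest of `__init__.py` (and no tabs) should make the error go away.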
@rezraz1 The reason this is raised is that each image should have 3 channels, but a PNG carries an extra alpha channel, hence: `RuntimeError: Given groups=1, weight of size [3, 3, 1, 1], expected input[1, 4, 268, 300] to have 3 channels, but got 4 channels instead.` So after changing the LR images to 3 channels, the HR images should be changed too. Then the whole problem is solved; there is no need to do anything else.
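One way to apply that channel fix up front is to strip the alpha channel from the PNGs before they ever reach the dataloader. A minimal sketch with Pillow (the `to_rgb` helper and its paths are placeholders, not part of the EDSR codebase):

```python
from PIL import Image

def to_rgb(path_in, path_out):
    """Drop the alpha channel from a PNG so tensors are 3-channel."""
    img = Image.open(path_in)
    if img.mode != "RGB":          # e.g. "RGBA" or palette "P"
        img = img.convert("RGB")
    img.save(path_out)

# Run the same conversion over both the LR and HR image folders so
# their channel counts stay in sync, as noted above.
```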