
When I train RCAN, something goes wrong: RuntimeError: Expected 4-dimensional input for 4-dimensional weight 3 3 1, but got 3-dimensional input of size [1, 184, 270] instead

countingstarsmer opened this issue · 8 comments

I ran

    python main.py --template RCAN --save RCAN_BIX2_G10R20P48 --scale 2 --reset --save_results --patch_size 96

and then got:

Traceback (most recent call last):
  File "main.py", line 33, in <module>
    main()
  File "main.py", line 28, in main
    t.test()
  File "/home/zhj/EDSR-1.1.0/src/trainer.py", line 89, in test
    sr = self.model(lr, idx_scale)
  File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhj/EDSR-1.1.0/src/model/__init__.py", line 57, in forward
    return forward_function(x)
  File "/home/zhj/EDSR-1.1.0/src/model/__init__.py", line 135, in forward_chop
    y = self.forward_chop(*p, shave=shave, min_size=min_size)
  File "/home/zhj/EDSR-1.1.0/src/model/__init__.py", line 126, in forward_chop
    y = P.data_parallel(self.model, *x, range(n_GPUs))
  File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 204, in data_parallel
    return module(*inputs[0], **module_kwargs[0])
  File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhj/EDSR-1.1.0/src/model/rcan.py", line 107, in forward
    x = self.sub_mean(x)
  File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhj/anaconda3/envs/pytorch1.1/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 3 3 1, but got 3-dimensional input of size [1, 184, 270] instead
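For reference, a minimal sketch (illustrative, not repository code) of the same class of failure: sub_mean in rcan.py is a 1x1 Conv2d (MeanShift), and on PyTorch 1.1 a Conv2d only accepts 4-D [N, C, H, W] input, so a tensor that has lost its batch dimension triggers exactly this kind of RuntimeError.

    import torch
    import torch.nn as nn

    # Stand-in for sub_mean: MeanShift is a 1x1 Conv2d, so its weight is 4-D.
    conv = nn.Conv2d(3, 3, kernel_size=1)

    x = torch.randn(3, 184, 270)   # 3-D tensor: the batch dimension is missing

    try:
        conv(x)                    # on PyTorch 1.1 this raises the RuntimeError above
                                   # (recent PyTorch releases may accept unbatched input)
    except RuntimeError as e:
        print(e)

    y = conv(x.unsqueeze(0))       # [1, 3, 184, 270]: adding the batch dim back works
    print(y.shape)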

countingstarsmer · Jul 12 '20

Same error on my machine

blueardour · Jul 16 '20

+1

Raymondmax · Jul 19 '20

How can I solve it? Please help.

flymmmfly · Jul 20 '20

Same error on my machine

RuyuXu2019 · Jul 25 '20

You could refer to #184.

QiangLi1997 · Aug 16 '20

In my opinion, the error mentioned above is caused by a mismatch in the number of dimensions. Specifically, the layer requires a 4-dimensional input but a 3-dimensional tensor was given, so you should step through the lines listed in the error message one by one and monitor how the variables' dimensions change during the run.
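For example, a hedged sketch of that kind of check (check_input is a hypothetical helper, not part of the repository): dropping it in just before the call that fails, e.g. before x = self.sub_mean(x) in rcan.py, shows where the batch dimension disappears.

    # Hypothetical debugging helper (not repository code).
    def check_input(x, where):
        print("{}: dim={}, shape={}".format(where, x.dim(), tuple(x.shape)))
        assert x.dim() == 4, "{} expected [N, C, H, W], got {}".format(where, tuple(x.shape))
        return x

    # Usage inside RCAN.forward (rcan.py), just before the failing line:
    #     x = check_input(x, "before sub_mean")
    #     x = self.sub_mean(x)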

Hunter-Murphy · Aug 19 '20

I was successful with the fix from #184 referenced above. In particular, model/__init__.py line 133 onwards became:

        else:
            for p in zip(*x_chops):
                # zip(*x_chops) yields 3-D [C, H, W] slices; restore the batch
                # dimension before the recursive call
                p = [p_.unsqueeze(0) for p_ in p]
                y = self.forward_chop(*p, shave=shave, min_size=min_size)

I really don't have a good explanation but it seems to work.


I guess the gist of it is that x_chops contains one tensor per input to forward_chop (args is List[Tensor(B x C x H x W)]). Each tensor is cut into quarters and concatenated along the batch dimension, so x_chops ends up as something like List[Tensor(B*4 x C x H/2 x W/2)]. Then the "clever" line

for p in zip(*x_chops):

is equivalent to something like

for i in range(B*4):
    p = [x_ch[i, ...] for x_ch in x_chops]

which, as you can see when it's not so "clever", drops the first dimension of each element of x_chops. That is a problem because p is the recursive input to forward_chop. :(
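A small standalone sketch of that behaviour (the shapes are illustrative, not taken from the run above): iterating a batched tensor with zip yields 3-D slices, and the unsqueeze(0) from the fix above restores the 4-D shape before the recursive call.

    import torch

    # One entry per input, with the four chops stacked along the batch dimension.
    x_chops = [torch.randn(4, 3, 92, 135)]

    for p in zip(*x_chops):
        print([t.shape for t in p])        # [torch.Size([3, 92, 135])]  -> batch dim dropped

    for p in zip(*x_chops):
        p = [p_.unsqueeze(0) for p_ in p]  # the fix from #184
        print([t.shape for t in p])        # [torch.Size([1, 3, 92, 135])] -> 4-D again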

flauted · Sep 30 '20

I solved this problem by omitting the "--chop" option (Python 3.6, PyTorch 1.1).

wkiulu · Dec 10 '20