Medical-Transformer

Does this network need a fixed-size image as input?

Open binzhangbin opened this issue 2 years ago • 15 comments

```
Traceback (most recent call last):
  File "train.py", line 140, in <module>
    output = model(X_batch)
  File "/home/zzp/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzp/2t/binzhang/Medical-Transformer/lib/models/axialnet.py", line 507, in forward
    return self._forward_impl(x)
  File "/home/zzp/2t/binzhang/Medical-Transformer/lib/models/axialnet.py", line 485, in _forward_impl
    x1 = self.layer1(x)
  File "/home/zzp/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzp/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/zzp/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzp/2t/binzhang/Medical-Transformer/lib/models/axialnet.py", line 331, in forward
    out = self.hight_block(out)
  File "/home/zzp/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzp/2t/binzhang/Medical-Transformer/lib/models/axialnet.py", line 157, in forward
    qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
  File "/home/zzp/.local/lib/python3.8/site-packages/torch/functional.py", line 299, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [2000, 8, 1, 500]->[2000, 8, 500, 1, 1] [1, 64, 64]->[1, 1, 64, 64, 1]
```
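The failing contraction can be reproduced outside the model. In `einsum('bgci,cij->bgij', q, q_embedding)` the `i` axis of `q` (the spatial length at this layer) must match the `i` axis of `q_embedding` (the span the positional embedding was built for), so a 500-long feature axis against a 64-wide embedding cannot broadcast. A minimal sketch in NumPy with the shapes from the error above (batch dimension scaled down so the arrays stay small):

```python
import numpy as np

# (c, i, j): positional embedding built for 64 positions (the default img_size)
emb = np.zeros((1, 64, 64))

# (b, g, c, i): spatial axis i = 64 matches the embedding, so this works
q_ok = np.zeros((4, 8, 1, 64))
out = np.einsum('bgci,cij->bgij', q_ok, emb)
print(out.shape)  # (4, 8, 64, 64)

# i = 500 (a 1000x1000 input halved at layer1) != 64 -> broadcast failure
q_bad = np.zeros((4, 8, 1, 500))
try:
    np.einsum('bgci,cij->bgij', q_bad, emb)
except ValueError as err:
    print('shape mismatch:', err)
```

The same rule explains the NumPy `ValueError` posted later in this thread: the error is not in `einsum` itself but in feeding a feature map whose spatial size differs from the one the embedding was created for.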

binzhangbin avatar Aug 25 '21 01:08 binzhangbin

Did you solve your problem? I'm running into the same issue.

wuyangzz avatar Aug 26 '21 03:08 wuyangzz

> Did you solve your problem? I'm running into the same issue.

I hit the same error. Any progress?

Hypothesisl avatar Sep 06 '21 11:09 Hypothesisl

> I hit the same error. Any progress?

huanqingxu avatar Sep 09 '21 02:09 huanqingxu

```
    qr = np.einsum('bgci,cij->bgij', q, q_embedding)
  File "<__array_function__ internals>", line 6, in einsum
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/numpy/core/einsumfunc.py", line 1356, in einsum
    return c_einsum(*operands, **kwargs)
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (500,8,1,500)->(500,8,500,newaxis,1) (1,64,64)->(64,64,1)
```

huanqingxu avatar Sep 09 '21 02:09 huanqingxu

```
(1000, 1000, 3) (256, 256, 1)
torch.Size([500, 8, 1, 500]) torch.Size([1, 64, 64]) torch.Size([500, 8, 1, 500])
Traceback (most recent call last):
  File "/media/lab549/Data/Medical-Transformer-main/train.py", line 140, in <module>
    output = model(X_batch)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
    output.reraise()
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
    output = module(*input, **kwargs)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/lab549/Data/Medical-Transformer-main/lib/models/axialnet.py", line 717, in forward
    return self._forward_impl(x)
  File "/media/lab549/Data/Medical-Transformer-main/lib/models/axialnet.py", line 642, in _forward_impl
    x1 = self.layer1(x)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/lab549/Data/Medical-Transformer-main/lib/models/axialnet.py", line 337, in forward
    out = self.hight_block(out)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/lab549/Data/Medical-Transformer-main/lib/models/axialnet.py", line 161, in forward
    qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
  File "/home/lab549/anaconda3/envs/medt/lib/python3.6/site-packages/torch/functional.py", line 241, in einsum
    return torch._C._VariableFunctions.einsum(equation, operands)
RuntimeError: size of dimension does not match previous size, operand 1, dim 1
```

huanqingxu avatar Sep 09 '21 02:09 huanqingxu

Guys, have any of you solved this? I have the same problem.

Agigina avatar Sep 14 '21 09:09 Agigina

Yes, with the current code setup, the network needs a fixed image size for all images in the dataset.
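In practice that means resizing (or padding) every image to one square size before training. A minimal nearest-neighbour sketch in NumPy; in a real pipeline you would typically use `cv2.resize`, PIL, or a torchvision transform, and `resize_to_square` is a hypothetical helper, not repo code:

```python
import numpy as np

def resize_to_square(img, size):
    """Nearest-neighbour resize of an (H, W, C) array to (size, size, C)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows[:, None], cols]      # advanced indexing picks the grid

img = np.random.rand(1000, 1000, 3)      # the 1000x1000 input from this thread
print(resize_to_square(img, 256).shape)  # (256, 256, 3)
```

Applying this to every image (and its mask) in the dataset, with `size` equal to the `img_size` the model is built with, avoids the einsum shape mismatch.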

jeya-maria-jose avatar Sep 15 '21 02:09 jeya-maria-jose

```
  File "train.py", line 140, in <module>
    output = model(X_batch)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 507, in forward
    return self._forward_impl(x)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 485, in _forward_impl
    x1 = self.layer1(x)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 331, in forward
    out = self.hight_block(out)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 157, in forward
    qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\functional.py", line 299, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [2560, 8, 1, 360]->[2560, 8, 360, 1, 1] [1, 64, 64]->[1, 1, 64, 64, 1]
```

Have you solved this?

123xu223 avatar Sep 15 '21 04:09 123xu223

> Yes, with the current code setup, the network needs a fixed image size for all images in the dataset.

I ran into a new error. You said before that this happens when the input image sizes differ, but this time all my images are the same size and I still get the error.

Total_params: 1347266

```
Traceback (most recent call last):
  File "train.py", line 140, in <module>
    output = model(X_batch)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 507, in forward
    return self._forward_impl(x)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 485, in _forward_impl
    x1 = self.layer1(x)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 331, in forward
    out = self.hight_block(out)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Medical-Transformer-main\lib\models\axialnet.py", line 157, in forward
    qr = torch.einsum('bgci,cij->bgij', q, q_embedding)
  File "D:\anaconda1\envs\Medical-transformer\lib\site-packages\torch\functional.py", line 299, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [2560, 8, 1, 360]->[2560, 8, 360, 1, 1] [1, 180, 180]->[1, 1, 180, 180, 1]
```

123xu223 avatar Sep 15 '21 05:09 123xu223

> Yes, with the current code setup, the network needs a fixed image size for all images in the dataset.

I ran into a new error. You said this happens when the input image sizes differ, but all my images are the same size this time and I still get the error. My images are 1280×720 pixels.

123xu223 avatar Sep 15 '21 05:09 123xu223

You need to set the command-line argument --img_size to match the image size.

wikiy223 avatar Oct 04 '21 07:10 wikiy223

```python
model = lib.models.axialnet.MedT(img_size=imgsize, imgchan=imgchant)
```
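The reported shapes are consistent with that fix: at layer1 the axial-attention blocks run at half the input resolution, so the positional-embedding span there is `img_size // 2`. This is an inference from the tracebacks in this thread (e.g. span 64 for the default build, span 180 after passing 360), not repo documentation:

```python
def layer1_span(img_size):
    """Positional-embedding span at layer1, inferred from the tracebacks:
    a model built with img_size N uses an N // 2 embedding there."""
    return img_size // 2

# Default build -> span 64, which is why 1000x1000 inputs (feature length
# 500 at layer1) hit the [..., 500] vs [1, 64, 64] mismatch above.
print(layer1_span(128))   # 64
print(layer1_span(1000))  # 500: img_size matching the images makes them agree
print(layer1_span(360))   # 180, the embedding span in the [1, 180, 180] error
```

So the einsum error disappears once `img_size` equals the actual (square) image size fed to the model.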

wikiy223 avatar Oct 04 '21 07:10 wikiy223

> (quotes huanqingxu's DataParallel einsum traceback above)

I'm getting this error too, the einsum failure.

wikiy223 avatar Oct 04 '21 07:10 wikiy223

> (quotes the einsum RuntimeError traceback from the original post)

Has anyone solved this? I'm hitting the same problem.

1999zjq avatar May 23 '22 08:05 1999zjq

Thanks, it's solved.

hczyni avatar Aug 22 '23 01:08 hczyni