DRENet

Cannot run DRENet due to a bug: RuntimeError: The size of tensor a (160) must match the size of tensor b (256) at non-singleton dimension 1

Open · vtise-github opened this issue 1 year ago · 1 comment

Traceback (most recent call last):
  File "/home/leo/Desktop/small_object/DRENet/detect.py", line 176, in <module>
    detect()
  File "/home/leo/Desktop/small_object/DRENet/detect.py", line 73, in detect
    pred = model(img, augment=opt.augment)[0][0]
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/Desktop/small_object/DRENet/models/yolo.py", line 131, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/leo/Desktop/small_object/DRENet/models/yolo.py", line 148, in forward_once
    x = m(x)  # run
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/Desktop/small_object/DRENet/models/common.py", line 208, in forward
    return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/Desktop/small_object/DRENet/models/common.py", line 153, in forward
    energy = content_content + content_position
RuntimeError: The size of tensor a (160) must match the size of tensor b (256) at non-singleton dimension 1
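For context, the failing line in models/common.py adds a content term, whose shape follows the actual feature-map size, to a positional term whose size was fixed when the module was built. The following is a minimal, hypothetical sketch of that mechanism, not DRENet's actual module (the class name, shapes, and bias layout are made up for illustration); it reproduces the same error:

```python
# Hypothetical illustration only -- not DRENet's actual code. It mimics the
# pattern in models/common.py: a content term computed from the input is added
# to a positional term whose size was fixed when the layer was constructed.
import torch
import torch.nn as nn


class FixedPositionAttention(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        # Learned positional bias, one value per spatial location, sized at build time.
        self.pos = nn.Parameter(torch.zeros(1, height * width, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c)
        k = self.key(x).flatten(2)                    # (b, c, h*w)
        content_content = q @ k                       # (b, h*w, h*w) -- follows the input
        content_position = self.pos                   # (1, H0*W0, 1) -- fixed at build time
        return content_content + content_position    # fails when h*w != H0*W0


attn = FixedPositionAttention(channels=8, height=16, width=16)  # built for 16*16 = 256 positions
attn(torch.randn(1, 8, 16, 10))  # 16*10 = 160 positions -> the RuntimeError above
```

The 256 in the error message likely corresponds to the resolution the checkpoint was built for (a 512x512 input at stride 32 gives a 16x16 map, i.e. 256 positions), while the 160 comes from the different resolution actually fed to the model.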

Please help me!

vtise-github · Apr 25 '24 08:04

Hi @vtise-github,

It appears you're encountering an issue similar to #4 regarding input resolution; you can find potential solutions by referencing that issue. If you're using detect.py and our pretrained checkpoint, you may need to ensure the input size is 512 (a minimal sketch follows the screenshot below). Alternatively, if you're interested in training the network on images of a different resolution, you can explore the solutions in #4, or try our newly supported adaptive-resolution input feature mentioned in the README:

[Screenshot of the README section describing the adaptive resolution input feature]
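As a concrete illustration of the first option, here is a minimal sketch that forces the input to the 512x512 resolution the pretrained checkpoint expects before running inference. The resize step and names below are illustrative assumptions, not DRENet's own letterbox preprocessing; if the repository keeps YOLOv5's --img-size flag, passing --img-size 512 to detect.py should have the same effect.

```python
# Minimal sketch (assumption: the pretrained checkpoint expects 512x512 input;
# this plain resize is illustrative, not DRENet's own preprocessing pipeline).
import torch
import torch.nn.functional as F


def to_fixed_size(img: torch.Tensor, size: int = 512) -> torch.Tensor:
    """Resize a (B, C, H, W) image tensor so both spatial dimensions equal `size`."""
    return F.interpolate(img, size=(size, size), mode="bilinear", align_corners=False)


img = torch.rand(1, 3, 640, 400)   # an arbitrary input resolution
img = to_fixed_size(img)           # now (1, 3, 512, 512), matching the checkpoint
# pred = model(img, augment=opt.augment)[0][0]   # as in detect.py line 73
```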

WindVChen · Apr 25 '24 12:04