Traceback (most recent call last):
  File "/home/leo/Desktop/small_object/DRENet/detect.py", line 176, in <module>
    detect()
  File "/home/leo/Desktop/small_object/DRENet/detect.py", line 73, in detect
    pred = model(img, augment=opt.augment)[0][0]
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/Desktop/small_object/DRENet/models/yolo.py", line 131, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/leo/Desktop/small_object/DRENet/models/yolo.py", line 148, in forward_once
    x = m(x)  # run
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/Desktop/small_object/DRENet/models/common.py", line 208, in forward
    return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), dim=1))
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/leo/anaconda3/envs/revolution/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/Desktop/small_object/DRENet/models/common.py", line 153, in forward
    energy = content_content + content_position
RuntimeError: The size of tensor a (160) must match the size of tensor b (256) at non-singleton dimension 1
Please help me!
Hi @vtise-github,
It appears you're encountering an input-resolution issue similar to #4; you can find potential solutions there. If you're running detect.py with our pretrained checkpoint, make sure the input size is 512. Alternatively, if you'd like to train the network on images of a different resolution, you can follow the solutions in #4, or try the newly supported adaptive-resolution input feature described in the README:
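For context, here is a minimal sketch of why the error above occurs. The tensor names mirror the ones in the traceback, but the shapes and code are illustrative only, not DRENet's actual implementation: a positional term whose size was fixed at build time (256 positions, matching the 512 checkpoint resolution) is added to a content term whose size depends on the actual input, so a smaller input (160 positions here) triggers the broadcast failure.

```python
import torch

# Content term: its spatial size follows the actual input resolution.
content_content = torch.randn(1, 160, 160)

# Positional term: its size was baked in when the model was built for
# 512-pixel inputs, so it has 256 positions regardless of the input.
content_position = torch.randn(1, 256, 160)

try:
    # This mirrors `energy = content_content + content_position` in the
    # traceback: dimension 1 is 160 on one side and 256 on the other,
    # which PyTorch cannot broadcast.
    energy = content_content + content_position
except RuntimeError as e:
    print(e)
```

Feeding the network a 512-sized input makes both terms agree in dimension 1, which is why matching the checkpoint's training resolution resolves the error.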
