
Error when running detect.py: I added C3ResAtnMHSA to yolov7 and got the error below

Open denghuimin1 opened this issue 2 years ago • 2 comments

```
Namespace(weights=['/content/runs/train/exp2/weights/last.pt'], source='/content/1_6306.jpg', img_size=640, conf_thres=0.25, iou_thres=0.2, device='0', view_img=False, save_txt=False, save_conf=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project='runs/detect', name='exp', exist_ok=False, no_trace=False)
YOLOR 🚀 v0.1-122-g3b41c2c torch 1.9.1+cu111 CUDA:0 (Tesla T4, 15101.8125MB)
```

```
Fusing layers...
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
IDetect.fuse
Model Summary: 327 layers, 36431002 parameters, 6194944 gradients
Convert model to Traced-model...
/content/yolov7/models/common.py:2172: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if C != self.in_channels:
/content/yolov7/models/common.py:2131: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if C != self.in_channels:
/content/yolov7/models/common.py:2139: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if x.size(0)>1:
traced_script_module saved!
model is traced!
```
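The TracerWarnings above are not the crash itself; they are `torch.jit.trace` flagging shape-dependent Python conditionals (such as `if C != self.in_channels:` in `common.py`). Tracing records only the branch taken for the example input and freezes it, so the traced model may not generalize to other shapes. A minimal, self-contained illustration with a hypothetical module (not from the repo):

```python
import torch

class Gate(torch.nn.Module):
    # A Python `if` on a value that tracing tracks, the same pattern as
    # `if x.size(0)>1:` in common.py.
    def forward(self, x):
        if x.size(0) > 1:
            return x * 2
        return x

# Tracing with a batch-1 example records only the `return x` branch
# (and may emit a TracerWarning like the ones above).
traced = torch.jit.trace(Gate(), torch.randn(1, 3))
print(traced(torch.randn(4, 3)))   # still returns x unchanged: the branch was frozen
```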

```
Traceback (most recent call last):
  File "/content/yolov7/detect.py", line 196, in <module>
    detect()
  File "/content/yolov7/detect.py", line 83, in detect
    model(img, augment=opt.augment)[0]
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/yolov7/utils/torch_utils.py", line 372, in forward
    out = self.model(x)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  /content/yolov7/models/common.py(2052): forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1039): _slow_forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1051): _call_impl
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/container.py(139): forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1039): _slow_forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1051): _call_impl
  /content/yolov7/models/common.py(2072): forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1039): _slow_forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1051): _call_impl
  /content/yolov7/models/yolo.py(625): forward_once
  /content/yolov7/models/yolo.py(599): forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1039): _slow_forward
  /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py(1051): _call_impl
  /usr/local/lib/python3.9/dist-packages/torch/jit/_trace.py(952): trace_module
  /usr/local/lib/python3.9/dist-packages/torch/jit/_trace.py(735): trace
  /content/yolov7/utils/torch_utils.py(362): __init__
  /content/yolov7/detect.py(39): detect
  /content/yolov7/detect.py(196): <module>
RuntimeError: The size of tensor a (320) must match the size of tensor b (400) at non-singleton dimension 1
```
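For context, the failing addition is the kind of shape clash a BoTNet-style MHSA block with fixed-size relative positional embeddings produces when it sees a feature map it was not built for. The sketch below is illustrative only (class and parameter names such as `MHSA`, `rel_h`, `rel_w` are assumptions, not taken from DRENet's `common.py`): the positional embeddings are sized at construction time, so changing the inference resolution changes the number of tokens in the content term but not in the position term.

```python
import torch
import torch.nn as nn

class MHSA(nn.Module):
    """Minimal sketch of multi-head self-attention with fixed-size
    relative positional embeddings (hypothetical, for illustration)."""
    def __init__(self, n_dims, width, height, heads=4):
        super().__init__()
        self.heads = heads
        self.query = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.key = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.value = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        # Positional embeddings are sized for a *fixed* feature map
        # (width, height) chosen at construction time, e.g. 16x16 for a
        # 512-px input at stride 32.
        self.rel_h = nn.Parameter(torch.randn(1, heads, n_dims // heads, 1, height))
        self.rel_w = nn.Parameter(torch.randn(1, heads, n_dims // heads, width, 1))
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        n_batch, C, h, w = x.size()
        q = self.query(x).view(n_batch, self.heads, C // self.heads, -1)
        k = self.key(x).view(n_batch, self.heads, C // self.heads, -1)
        v = self.value(x).view(n_batch, self.heads, C // self.heads, -1)
        # Content term: one score per pair of the h*w tokens actually present.
        content_content = torch.matmul(q.permute(0, 1, 3, 2), k)            # (b, heads, h*w, h*w)
        # Position term: built from rel_h/rel_w, whose size was fixed in __init__.
        content_position = (self.rel_h + self.rel_w).view(
            1, self.heads, C // self.heads, -1).permute(0, 1, 3, 2)          # (1, heads, H0*W0, d)
        content_position = torch.matmul(content_position, q)                 # (b, heads, H0*W0, h*w)
        # This addition raises "The size of tensor a ... must match the size
        # of tensor b ..." whenever H0*W0 != h*w, i.e. whenever the inference
        # resolution differs from the one the block was configured for.
        energy = content_content + content_position
        attention = self.softmax(energy)
        out = torch.matmul(v, attention.permute(0, 1, 3, 2))
        return out.view(n_batch, C, h, w)

block = MHSA(64, width=16, height=16)      # built for a 16x16 map (512 px / stride 32)
block(torch.randn(1, 64, 16, 16))          # OK: 256 tokens on both sides
# block(torch.randn(1, 64, 20, 20))        # 640 px -> 20x20 = 400 tokens -> RuntimeError
```

The exact numbers in the error (320 vs 400) depend on the stride the block sits at and on how the image is letterboxed, but the mechanism is the same: the position term keeps the size it was configured for while the content term follows the actual input.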

denghuimin1 commented on Apr 10, 2023

Hi @denghuimin1,

Judging from the error message above, this is most likely an input-size problem. Note that if the image size is not 512, the configuration file needs to be modified accordingly. Please refer to issues #4 and #9 for details.

WindVChen commented on Apr 10, 2023
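For reference, the size the configuration has to account for can be computed directly. The helper below is hypothetical (not part of the repo) and assumes the attention block sits on the stride-32 feature map: at 512 px the grid is 16x16 (256 positions, the default configuration), while at 640 px it becomes 20x20 (400 positions), so the block's width/height settings must be updated to match, or the inference image size kept at 512.

```python
def mhsa_grid(img_size, stride=32):
    """Hypothetical helper: side length of the feature map that the
    attention block's positional embeddings must be built for."""
    assert img_size % stride == 0, "img-size should be a multiple of the stride"
    return img_size // stride

print(mhsa_grid(512), mhsa_grid(512) ** 2)   # 16, 256 positions (default config)
print(mhsa_grid(640), mhsa_grid(640) ** 2)   # 20, 400 positions (needs a cfg change)
```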

Thank you for your reply, @WindVChen. I made the changes but the error persists; I will keep looking into it.

denghuimin1 commented on Apr 11, 2023