yolov7
export onnx: The output dimension cannot correspond to the input dimension
I converted the model through this command:
python export.py --weights best.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640 --dynamic-batch
I get the file 'best.onnx', but when I load it, the model's output dimension does not correspond to the input dimension.
example: input shape = torch.Size([2, 3, 640, 640]), output shape = (12, 7)
The model predicted 12 boxes, but we don't know whether each box belongs to the first picture or the second.
here is my code:

```python
import torch
import onnxruntime as ort

input_batch = torch.cat([im1, im2], dim=0)
ort_session = ort.InferenceSession("best.onnx")
outputs = ort_session.run(None, {"images": input_batch.numpy()})
```
@triple-Mu Is there an issue with batch-processing for ONNX models?
The 7 columns are batch_id, x0, y0, x1, y1, class_id, score, so you can assign each box to its image using batch_id.
See the dynamic-batch inference script at https://github.com/WongKinYiu/yolov7/blob/main/tools/YOLOv7-Dynamic-Batch-ONNXRUNTIME.ipynb
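For example (a minimal sketch with made-up detection rows), the flat (N, 7) output can be split per image by filtering on the first column:

```python
import numpy as np

# Simulated end2end ONNX output: one row per detection,
# columns = [batch_id, x0, y0, x1, y1, class_id, score]
dets = np.array([
    [0, 10, 10, 50, 50, 1, 0.9],
    [0, 20, 30, 80, 90, 2, 0.8],
    [1, 15, 25, 60, 70, 1, 0.7],
], dtype=np.float32)

batch_size = 2
# One array of boxes per input image
per_image = [dets[dets[:, 0] == i] for i in range(batch_size)]

print([d.shape for d in per_image])  # → [(2, 7), (1, 7)]
```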
@triple-Mu Yes, I can distinguish them by batch_id, but it's not convenient. Is there any way to make the output batch size equal the input batch size?
As the ONNX NMS operator documentation (https://github.com/onnx/onnx/blob/main/docs/Operators.md#NonMaxSuppression) explains, we feed the tensor into the NMS op as a batch, and its output is meant to be distinguished by the first column. What you want may require more complex operator implementations, such as loops. I will keep thinking about how to implement this.
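Until something like that exists in the exported graph, one Python-side workaround (an illustrative sketch; `to_batched` is a hypothetical helper, not part of yolov7) is to pack the flat output back into a fixed-shape per-image tensor, padding with -1 where an image has fewer boxes:

```python
import numpy as np

def to_batched(dets, batch_size, topk=100):
    # Pack flat (N, 7) end2end output into a fixed-shape
    # (batch_size, topk, 6) array: [x0, y0, x1, y1, class_id, score],
    # padded with -1 where an image has fewer than topk boxes.
    out = np.full((batch_size, topk, 6), -1.0, dtype=np.float32)
    for i in range(batch_size):
        rows = dets[dets[:, 0] == i][:topk, 1:]  # drop batch_id column
        out[i, :len(rows)] = rows
    return out

# Example: 3 detections across a batch of 2 images
dets = np.array([
    [0, 10, 10, 50, 50, 1, 0.9],
    [0, 20, 30, 80, 90, 2, 0.8],
    [1, 15, 25, 60, 70, 1, 0.7],
], dtype=np.float32)
batched = to_batched(dets, batch_size=2)
print(batched.shape)  # → (2, 100, 6)
```

This gives every image the same output shape, at the cost of a padding sentinel that downstream code must mask out.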
Thanks a lot. It doesn't seem easy. By the way, why is the TorchScript export not done with NMS?