yolov9
How to export to ONNX?
python export.py --weights yolov9-c.pt --include onnx
export: data=G:\Item_done\yolo\yolo5\yolov9\yolov9-main\data\coco.yaml, weights=['yolov9-c.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx'] YOLOv5 2024-2-22 Python-3.9.16 torch-2.0.1+cu118 CPU
Fusing layers...
Model summary: 724 layers, 51141120 parameters, 0 gradients, 238.7 GFLOPs
Traceback (most recent call last):
File "G:\Item_done\yolo\yolo5\yolov9\yolov9-main\export.py", line 606, in
Export functions are not yet supported. The models should do re-parameterization first, then do export.
How to do the re-parameterization?
How to run TensorRT inference? I need to export to ONNX, then convert to TensorRT.
The re-parameterization functions are provided in the yolov7 repo, but currently we have not integrated them into this repo.
I need to export the ONNX model; please integrate them into this repo quickly!
Modify the code "if isinstance(m, (Detect, V6Detect))" to "if isinstance(m, (Detect, DualDDetect))", and you can get the ONNX model.
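The change above amounts to widening the detection-head classes that export.py special-cases before tracing. A minimal sketch of the idea, with stub classes standing in for the real yolov9 modules (in the actual repo, `Detect` and `DualDDetect` come from models/yolo.py, and the patched attributes mirror what export.py sets):

```python
# Sketch of the isinstance-dispatch change in export.py.
# Stub classes stand in for the real yolov9 head modules.
class Detect: ...
class V6Detect: ...
class DualDDetect: ...  # the head used by yolov9-c.pt


def patch_head(m, inplace=True, dynamic=False):
    """Return which branch export.py's module loop would take for m."""
    # Before: isinstance(m, (Detect, V6Detect)) - DualDDetect fell through,
    # so the yolov9 head was never prepared for export.
    # After: include DualDDetect so yolov9 heads get export-time patching.
    if isinstance(m, (Detect, DualDDetect)):
        m.inplace = inplace
        m.dynamic = dynamic
        return "patched"
    return "skipped"
```

With the original tuple, `patch_head(DualDDetect())` would take the "skipped" branch, which is why the export failed for yolov9 weights.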
> need to export onnx model, please integrate them into this repo quickly!
>
> Modify this code "if isinstance(m, (Detect, V6Detect))" to "if isinstance(m, (Detect, DualDDetect))", and you can get the onnx
I don't think that really works
https://github.com/WongKinYiu/yolov9/pull/20 should fix this 👍 (+ converted models can be found here)
> need to export onnx model, please integrate them into this repo quickly!
>
> Modify this code "if isinstance(m, (Detect, V6Detect))" to "if isinstance(m, (Detect, DualDDetect))", and you can get the onnx
>
> I don't think that really works
This solution works for me. I have completed the ONNX export, as well as inference with TRT and ONNX Runtime (ORT). Furthermore, the ONNX used by TRT includes an NMS node.
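For reference, the NMS node appended to the ONNX graph performs standard greedy IoU-based suppression. A minimal pure-Python sketch of that logic (illustrative only, not the TRT plugin itself; the 0.45 threshold matches the export defaults shown earlier in the thread):

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS: keep highest-scoring boxes, drop overlaps above iou_thres."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thres for j in keep):
            keep.append(i)
    return keep
```

Baking this into the graph means the engine emits final detections directly, with no host-side post-processing step.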
@xenova's solution works
I can confirm that @xenova exports work
(Modifying the code "if isinstance(m, (Detect, V6Detect))" to "if isinstance(m, (Detect, DualDDetect))" gets you the ONNX.) After making this modification, I ran into a new problem:
Fusing layers...
Model summary: 724 layers, 51141120 parameters, 0 gradients, 238.7 GFLOPs
Traceback (most recent call last):
File "export.py", line 607, in
You can find how to export the ONNX model at the following: https://github.com/AICVer/yolov9.infer
@xinsuinizhuan I used it: https://github.com/thaitc-hust/yolov9-tensorrt
I also first convert weights.pt to ONNX with the NMS module, and then convert to a TensorRT engine for inference.
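For the ONNX-to-TensorRT step, the `trtexec` tool shipped with TensorRT is the usual starting point. A sketch of the invocation (file names are placeholders, and the fp16 flag is optional):

```shell
# Build a TensorRT engine from the exported ONNX model.
# File names are placeholders; adjust to your paths.
trtexec --onnx=yolov9-c.onnx \
        --saveEngine=yolov9-c.engine \
        --fp16   # optional: half-precision engine, if the GPU supports it
```

If the ONNX already embeds an NMS node (as described above), the engine outputs final detections directly.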
Hello!! To fix the error at `shape = tuple((y[0] if isinstance(y, tuple) else y).shape)  # model output shape` (`AttributeError: 'list' object has no attribute 'shape'`), I passed y directly: `shape = y`.
y is a tuple, but y[0] is a list, and this list has no `.shape`.
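A more robust workaround (a sketch, not the repo's code) is to unwrap the nested output until something with a `.shape` attribute appears, rather than hard-coding one level of indexing:

```python
def output_shape(y):
    """Best-effort shape of the first real tensor in a nested model output.

    yolov9's forward can return a tuple whose first element is a list of
    tensors, so unwrap tuples/lists until an object with .shape appears.
    """
    while isinstance(y, (tuple, list)):
        y = y[0]
    return tuple(y.shape)
```

This handles both the plain-tensor case and arbitrarily nested tuple/list outputs with one code path.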
My export looked like this:
!python export.py --weights /content/drive/MyDrive/IAGeneration/Yolov9/yolov9/runs/train/exp3/weights/best.pt \
  --batch-size 1 --imgsz 640 --include torchscript onnx
My best.pt is 133.47 MB, while my best.onnx is 262.22 MB. Is this normal?
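The roughly 2x size jump is consistent with a half-precision checkpoint being exported to a float32 ONNX graph (.pt checkpoints are commonly saved in fp16, while ONNX export defaults to fp32 unless `--half` is used). A back-of-the-envelope check, using the yolov9-c parameter count from the "Model summary" line earlier in the thread (exact file sizes also include graph and metadata overhead):

```python
# Rough model-file sizes from parameter count (ignores graph/metadata overhead).
params = 51_141_120          # yolov9-c, from the "Model summary" log line

mb = lambda nbytes: nbytes / 1e6
fp16_mb = mb(params * 2)     # 2 bytes/param at half precision
fp32_mb = mb(params * 4)     # 4 bytes/param at float32 (ONNX export default)

print(f"fp16 ~{fp16_mb:.1f} MB, fp32 ~{fp32_mb:.1f} MB, "
      f"ratio {fp32_mb / fp16_mb:.1f}x")
```

The ~2x ratio matches the observed 133 MB to 262 MB growth, so the size itself looks normal; exporting with `--half` should bring the ONNX back down.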
The inference worked with:
!python detect.py --data /Yolov9/projdocyolov9red.yaml --weights best.onnx --conf 0.50 --source /data/imgs --device 0 --save-txt
But it took about 3 minutes to detect 100 images. Is this normal?
> need to export onnx model, please integrate them into this repo quickly!
>
> Modify this code "if isinstance(m, (Detect, V6Detect))" to "if isinstance(m, (Detect, DualDDetect))", and you can get the onnx

Hi, I tried this but I am getting `NameError: name 'DualDDetect' is not defined`. Am I missing something?
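That NameError usually just means export.py references `DualDDetect` without importing it; in the upstream repo the class is defined in models/yolo.py, so extending the existing import there should clear it (module path assumed from the repo layout). A minimal reproduction of the symptom:

```python
# Fix (assumed module path, matching the upstream yolov9 repo layout):
#
#     from models.yolo import Detect, DualDDetect
#
# Minimal reproduction of the symptom when that import is missing:
def check(m):
    return isinstance(m, DualDDetect)  # NameError if DualDDetect isn't imported

try:
    check(object())
except NameError as e:
    print(e)  # name 'DualDDetect' is not defined
```

After adding the import, the modified isinstance check can see the class and the export proceeds.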