tensorrt_inference
Export YOLOv4-P7 to ONNX format
I am running this command:
python3 models/export_onnx.py --weights ./weights/yolov4-p7.pt --img-size 1536
to convert the YOLOv4-P7 model to ONNX format, but it times out. Here is the resulting output:
Namespace(batch_size=1, img_size=[1536, 1536], weights='./weights/yolov4-p7.pt')
Fusing layers... Model Summary: 503 layers, 2.87475e+08 parameters, 2.7862e+08 gradients
Starting ONNX export with onnx 1.9.0...
^C
The process gets interrupted automatically before the export finishes. What can I do to get the conversion working?