How do I run FP16 inference for a YOLOv5 model with ONNX Runtime in C++?
Describe the issue
When I run inference on an FP16 YOLOv5 model that was converted like this,
no results come out. Why is that?
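One plausible cause of silently empty results here (an assumption, since the conversion snippet did not survive in the issue) is feeding float32 pixel data into a model whose input type is tensor(float16): ONNX Runtime does not convert for you, and the input tensor must be created with element type ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16 over 16-bit storage. A minimal sketch of the float32-to-binary16 bit conversion for the input buffer, kept free of the onnxruntime dependency so it compiles standalone (the helper name float_to_half is mine, not an ONNX Runtime API):

```cpp
#include <cstdint>
#include <cstring>

// Convert an IEEE-754 float32 to binary16 bits (round-to-nearest-even for
// normals, no NaN payload preservation). Hypothetical helper, not part of
// the ONNX Runtime API.
uint16_t float_to_half(float f) {
    uint32_t x;
    std::memcpy(&x, &f, sizeof(x));               // reinterpret bits safely
    uint32_t sign = (x >> 16) & 0x8000u;          // sign moves to bit 15
    uint32_t expf = (x >> 23) & 0xFFu;            // float32 exponent field
    uint32_t mant = x & 0x7FFFFFu;                // 23-bit mantissa
    int32_t  exph = (int32_t)expf - 127 + 15;     // rebias for binary16

    if (expf == 0xFFu)                            // inf or NaN
        return (uint16_t)(sign | 0x7C00u | (mant ? 0x200u : 0u));
    if (exph >= 31)                               // overflow -> +/- inf
        return (uint16_t)(sign | 0x7C00u);
    if (exph <= 0) {                              // half subnormal or zero
        if (exph < -10) return (uint16_t)sign;    // underflows to +/- 0
        mant |= 0x800000u;                        // restore implicit leading 1
        uint32_t shift = (uint32_t)(14 - exph);
        uint16_t h = (uint16_t)(sign | (mant >> shift));
        if (mant & (1u << (shift - 1))) ++h;      // simple round-half-up
        return h;
    }
    uint16_t h = (uint16_t)(sign | ((uint32_t)exph << 10) | (mant >> 13));
    uint32_t rem = mant & 0x1FFFu;                // 13 dropped bits
    if (rem > 0x1000u || (rem == 0x1000u && (mant & 0x2000u)))
        ++h;                                      // round to nearest even
    return h;
}
```

Each preprocessed float would then go through this helper into a std::vector of uint16_t, from which the input Ort::Value is created with the FLOAT16 element type.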
To reproduce
Urgency
No response
Platform
Linux
OS Version
22.0
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.10.1
ONNX Runtime API
C++
Architecture
X86
Execution Provider
Default CPU
Execution Provider Library Version
No response
Model File
The official yolov5s model
Is this a quantized model?
Yes
@hkdddld Could you please share the entire reproducer in text format so that I can execute it?
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Did you ever solve this, OP? My C++ FP16 super-resolution model also gives wrong inference results, while Python inference works fine.
onnxruntime is 1.18.1 now. Do you encounter the same issue with latest commit?
Did you ever solve this, OP? My C++ FP16 super-resolution model also gives wrong inference results, while Python inference works fine.

Not solved.
onnxruntime is 1.18.1 now. Do you encounter the same issue with the latest commit?

I haven't tried the latest version yet.
Could you use the latest commit on main or release (i.e., 1.18.1) and see whether the issue is gone?
Here is a reference project I found: https://github.com/Amyheart/yolov5v8-dnn-onnxruntime . It supports YOLO_ORIGIN_V5_HALF (FP16), but in practice it runs slower than the FP32 yolov5.onnx; my graphics card is an RTX 4070.
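The output side can bite in the same way: if the model's outputs are also tensor(float16), the buffer returned by Run holds raw binary16 bits, and reading it through a float pointer yields garbage boxes that the confidence threshold then filters down to nothing. A sketch of the reverse conversion, again a standalone helper of my own naming rather than an ONNX Runtime API:

```cpp
#include <cstdint>
#include <cstring>

// Widen raw IEEE-754 binary16 bits back to float32. Hypothetical helper
// for decoding FP16 output tensors before post-processing (NMS, score
// thresholding).
float half_to_float(uint16_t h) {
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;            // 5-bit exponent field
    uint32_t mant = h & 0x3FFu;                   // 10-bit mantissa
    uint32_t bits;
    if (exp == 0) {
        if (mant == 0) {
            bits = sign;                          // +/- zero
        } else {                                  // subnormal: normalize it
            exp = 1;
            while ((mant & 0x400u) == 0) { mant <<= 1; --exp; }
            mant &= 0x3FFu;                       // drop the leading 1 again
            bits = sign | ((exp + 112) << 23) | (mant << 13);
        }
    } else if (exp == 31) {                       // inf / NaN
        bits = sign | 0x7F800000u | (mant << 13);
    } else {                                      // normal: rebias 15 -> 127
        bits = sign | ((exp + 112) << 23) | (mant << 13);
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```

As for the speed observation: FP16 being slower than FP32 is consistent with the model running on a provider that lacks fast half-precision kernels (e.g. the default CPU EP, which inserts Cast nodes around most ops), so the FP16 file alone does not guarantee a speedup.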