rknn-toolkit2
YOLOv7 convert fail
Hi, I'm facing a problem when trying to convert a YOLOv7 ONNX model: the conversion process fails.
OS: Linux 20.04
Architecture: x86_64
Here is the code:
from rknn.api import RKNN

rknn = RKNN(verbose=True, verbose_file="./verbose_log")

ret = rknn.config(target_platform='rk3588')
if ret != 0:
    print('config failed')
    exit(ret)

ret = rknn.load_onnx(model="yolov7-ort-tiny.onnx")
if ret != 0:
    print("load failed")
    exit(ret)

ret = rknn.build(do_quantization=True)
if ret != 0:
    print("build failed")
    exit(ret)

name = "_quantized_rknn"
ret = rknn.export_rknn("yolov7_rknn.rknn")
if ret != 0:
    print("export failed!")
    exit(ret)
And here is the output:
W __init__: rknn-toolkit2 version: 1.4.0-22dcfef4
I Save log info to: ./verbose_log
W load_onnx: The config.mean_values is None, zeros will be set for input 0!
W load_onnx: The config.std_values is None, ones will be set for input 0!
I base_optimize ...
I base_optimize done.
I
I fold_constant ...
E build: The input 0 of NonMaxSuppression('NonMaxSuppression_205') need to be constant! It will cause the graph to be a dynamic graph!
W build: ===================== WARN(3) =====================
E rknn-toolkit2 version: 1.4.0-22dcfef4
E build: Catch exception when building RKNN model!
E build: Traceback (most recent call last):
E build: File "rknn/api/rknn_base.py", line 1541, in rknn.api.rknn_base.RKNNBase.build
E build: File "rknn/api/graph_optimizer.py", line 567, in rknn.api.graph_optimizer.GraphOptimizer.fold_constant
E build: File "rknn/api/graph_optimizer.py", line 323, in rknn.api.graph_optimizer._dynamic_check
E build: File "rknn/api/rknn_log.py", line 113, in rknn.api.rknn_log.RKNNLog.e
E build: ValueError: The input 0 of NonMaxSuppression('NonMaxSuppression_205') need to be constant! It will cause the graph to be a dynamic graph!
build failed
Process finished with exit code 255
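
To see which NonMaxSuppression node the error refers to, and which tensors feed it, the ONNX graph can be inspected directly. This is a minimal sketch using the standard onnx package; the filename matches the script above:

import onnx

model = onnx.load("yolov7-ort-tiny.onnx")
for node in model.graph.node:
    if node.op_type == "NonMaxSuppression":
        # Print the node name and the tensors that feed it (boxes, scores, ...).
        # These input tensors are the ones you would keep as graph outputs
        # if you cut the NMS head off the model.
        print(node.name, "inputs:", list(node.input))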
I observe the same problem. Does anyone know any workarounds?
I think you should remove the NMS layer from the model graph; this worked for me with another model.
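
One way to cut the NMS head off is to extract a sub-model that stops at the tensors feeding the NonMaxSuppression node (their names can be read from Netron or from the inspection snippet above). This is a hedged sketch using onnx.utils.extract_model; the input and output tensor names here are placeholders and must be replaced with the real names from your graph:

import onnx
import onnx.utils

# Placeholder tensor names -- replace with the actual model input name and the
# names of the tensors that feed NonMaxSuppression_205 in your graph.
input_names = ["images"]
output_names = ["boxes_before_nms", "scores_before_nms"]

# Extract everything up to (but not including) the NMS node into a new model.
onnx.utils.extract_model(
    "yolov7-ort-tiny.onnx",
    "yolov7-ort-tiny-no-nms.onnx",
    input_names,
    output_names,
)

NMS then has to be run on the host side after inference as part of post-processing.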
I removed the NMS layers from the YOLOv7 network, and it no longer hits this error. However, it now fails with a floating point exception (core dumped) at the end of conversion. Does anyone have any clues?
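
Not a fix, but one way to narrow down where that crash comes from is to build without quantization first; if that succeeds, the problem is likely in the quantization pass, and the calibration dataset and mean/std settings are the next thing to check. A minimal sketch, assuming the NMS-free model from above; the mean/std values are the common 0/255 normalization and may need adjusting for your preprocessing:

from rknn.api import RKNN

rknn = RKNN(verbose=True)
# Explicit mean/std to silence the load_onnx warnings; adjust to your preprocessing.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
rknn.load_onnx(model="yolov7-ort-tiny-no-nms.onnx")

# Build without quantization first; if this works, retry with
# do_quantization=True and a calibration dataset, e.g.
# rknn.build(do_quantization=True, dataset='./dataset.txt')
ret = rknn.build(do_quantization=False)
print("build:", "ok" if ret == 0 else "failed")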
Have you managed to resolve this?
Any update on this problem? We have faced the same error when using YOLOv5.
Has this problem been solved? I'm having the same problem with YOLOE.