rknn-toolkit2
This model runs through the PC RKNN simulator with no failures, but running it on the 3588 board with rknn-toolkit-lite causes problems. This is an EEGnet...
We tried to convert an onnx model with batch>1 on rk3688. The conversion works flawlessly. However, when we try to run the model, we get an error as follows: meet...
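A common workaround when the RKNN runtime rejects a model converted with batch>1 is to export the model with batch=1 and loop over the batch at inference time. A minimal NumPy sketch of that pattern follows; `infer_single` is a hypothetical stand-in for the real single-sample RKNN inference call (e.g. `rknn_lite.inference(...)`), so the sketch is self-contained:

```python
import numpy as np

def infer_single(sample):
    # Hypothetical placeholder for a batch-1 RKNN inference call;
    # it just echoes a transformed input so the sketch runs anywhere.
    return sample * 2.0

def infer_batched(batch):
    """Run an NCHW batch through a batch-1 model one sample at a time."""
    outputs = [infer_single(sample[np.newaxis, ...])  # keep batch dim = 1
               for sample in batch]
    return np.concatenate(outputs, axis=0)       # reassemble the batch

batch = np.ones((4, 3, 8, 8), dtype=np.float32)
out = infer_batched(batch)
print(out.shape)
```

This trades throughput for compatibility: each sample incurs a separate NPU invocation, but the converted graph only ever sees the batch size it was built for.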

I converted the pp_lite_seg_stdc1 and pp_lite_seg_stdc2 models to rknn (with the target platform set to rk3588s) using rknn-toolkit2 on a linux x86_64 PC; the conversion process is fine and the resulting rknn model...
```
E init_runtime: Catch exception when init runtime!
E init_runtime: Traceback (most recent call last):
E init_runtime:   File "rknn/api/rknn_base.py", line 2502, in rknn.api.rknn_base.RKNNBase.init_runtime
E init_runtime:   File "rknn/api/rknn_runtime.py", line 391, in rknn.api.rknn_runtime.RKNNRuntime.build_graph...
```
The onnx model and the exported rknn model are on the following netdisk:
```
Link: https://pan.baidu.com/s/1qm7Q6Dr1yD8CBq9yzoYsvQ?pwd=rknn
Extraction code: rknn
```
Exporting an fp16 rknn model from the onnx model produced no errors. With do_quantization an Error occurred, but the int8 model could still be exported; the error is shown below. Running inference on the 3588, the model appears to produce outputs of the correct shape, but it is quite slow and reports an operator error:
I converted the rknn model from an onnx model successfully, but during testing the rknn results do not align with the onnx results. All the ops in my model follow https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/RKNN-Toolkit2_OP_Support-1.6.0.md. How...
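When rknn and onnx outputs disagree, a first step is to quantify the mismatch layer by layer or at least at the final output; cosine similarity is the metric RKNN's own accuracy analysis reports, so it is a natural choice. A minimal sketch (the two example arrays are made-up stand-ins for real onnxruntime and RKNN outputs):

```python
import numpy as np

def cosine_similarity(a, b):
    """Flatten two output tensors and compare direction; 1.0 means identical."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

onnx_out = np.array([[0.10, 0.90, 0.00]])  # hypothetical onnxruntime output
rknn_out = np.array([[0.12, 0.88, 0.01]])  # hypothetical rknn output
print(f"cosine similarity: {cosine_similarity(onnx_out, rknn_out):.4f}")
```

Values well below ~0.99 on an fp16 model usually point at a conversion or preprocessing problem (input layout, mean/std normalization) rather than quantization noise.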
```
ERROR: Invalid requirement: 'rknn-toolkit2==1.4.0-22dcfef4': Expected end or semicolon (after version specifier)
    rknn-toolkit2==1.4.0-22dcfef4
```
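This error comes from version parsing, not from the package itself: PEP 440 attaches a local version label (the commit hash here) with `+`, and a bare `-` is not a valid separator, so pip rejects the requirement string. A small sketch using the `packaging` library (assumed to be installed) shows the difference:

```python
from packaging.version import Version, InvalidVersion

# The hyphenated form is not a valid PEP 440 version, which is why
# pip reports "Invalid requirement".
try:
    Version("1.4.0-22dcfef4")
    print("accepted")
except InvalidVersion:
    print("hyphenated form rejected")

# The local version label must be attached with '+':
v = Version("1.4.0+22dcfef4")
print(v.public, v.local)
```

In practice Rockchip distributes rknn-toolkit2 as wheel files, so installing the provided `.whl` directly (`pip install <wheel file>`) sidesteps the version-specifier syntax entirely.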
I am converting https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx to rknn and hit the following problem; how should I resolve it?
```
No lowering found for: /model/decoder/embed_positions/CumSum, node type = CumSum, use CustomOperatorLower instead.
E RKNN: [16:59:26.262] dataconvert type -1 is unsupport in current!
```
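Since the RKNN compiler has no lowering for CumSum, one possible workaround (a sketch, not a verified recipe) is to cut the `embed_positions` cumsum out of the exported graph and feed precomputed position ids as an extra model input. The M2M100/NLLB implementation in `transformers` derives position ids as a cumulative count of non-pad tokens offset by `padding_idx`, which is easy to reproduce on the CPU with NumPy:

```python
import numpy as np

def position_ids(input_mask, padding_idx=1):
    """NLLB-style position ids: cumulative count of non-pad tokens,
    offset by padding_idx, with pad positions forced to padding_idx.
    Computing this outside the model sidesteps the unsupported CumSum op."""
    mask = input_mask.astype(np.int64)
    return np.cumsum(mask, axis=1) * mask + padding_idx

# 1 = real token, 0 = padding
mask = np.array([[1, 1, 1, 0, 0]])
print(position_ids(mask))  # pad slots map back to padding_idx
```

The `dataconvert type -1` error is a separate symptom and would need the full conversion log to diagnose.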