rknn-toolkit2

How can I query all the operators supported by RKNN?

Open SoulProficiency opened this issue 1 year ago • 4 comments

Where can I find all the operators currently supported by RK? I am trying to port efficientVit-sam (an encoder-decoder architecture) to the RKNN platform. The officially trained torch model exports to ONNX fine; I now want to convert the ONNX model to an RKNN model, which raises questions such as whether all of its operators are supported. Below is the code for converting the encoder:

from __future__ import absolute_import, print_function, division
import os
from rknn.api import RKNN

# onnx_model = './resource/onnx/model_256x256_max_mscf_0.924553.onnx'
onnx_model = './weights/l0_encoder.onnx' 
save_rknn_dir = './weights'

if __name__ == '__main__':

    # Create RKNN object
    rknn = RKNN()

    # pre-process config
    print('--> Config model')
    # rknn.config(mean_values=[[83.0535, 94.095, 82.1865]], std_values=[[53.856, 54.774, 53.9325]], reorder_channel='2 1 0', target_platform=['rk3588'], batch_size=1,quantized_dtype='asymmetric_quantized-u8') 
    # With mean 0 and std 255, the runtime normalizes inputs to [0, 1]
    rknn.config(mean_values=[[0.0, 0.0, 0.0]], std_values=[[255, 255, 255]], target_platform='rk3588')
    print('done')
    model_name = os.path.basename(onnx_model)
    # Load ONNX model
    print('--> Loading model %s' % model_name)
    ret = rknn.load_onnx(model=onnx_model)
    if ret != 0:
        print('Load %s failed!' % model_name)
        exit(ret)
    print('done')
    # Build model
    print('--> Building model')
    # ret = rknn.build(do_quantization=False, dataset='./quantization_dataset.txt', pre_compile=False)
    # Float model: quantization disabled, so no calibration dataset is needed
    ret = rknn.build(do_quantization=False, dataset=None)
    if ret != 0:
        print('Build net failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export RKNN model')
    # save_name = model_name.replace(os.path.splitext(model_name)[-1], "_no_quant.rknn")
    save_name = model_name.replace(os.path.splitext(model_name)[-1], ".rknn")
    ret = rknn.export_rknn(os.path.join(save_rknn_dir, save_name))
    if ret != 0:
        print('Export rknn failed!')
        exit(ret)
    print('done')

    rknn.release()
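
One way to check the exported graph independently of the toolkit is ONNX's own checker and shape inference, since the build failure reported next is raised from onnxruntime's shape inference. A minimal sketch, assuming the encoder path from the script above (strict_mode needs onnx >= 1.10):

import onnx
from onnx import shape_inference

# Validate the exported encoder with ONNX's own model checker.
model = onnx.load('./weights/l0_encoder.onnx')
onnx.checker.check_model(model)

# Run static shape inference; with strict_mode=True this raises on
# inconsistent shapes, reproducing the problem outside rknn-toolkit2.
inferred = shape_inference.infer_shapes(model, strict_mode=True)
print('shape inference ok: %d nodes' % len(inferred.graph.node))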

The following error is thrown during the conversion:

E build: Catch exception when building RKNN model!
E build: Traceback (most recent call last):
E build:   File "rknn/api/rknn_base.py", line 1546, in rknn.api.rknn_base.RKNNBase.build
E build:   File "rknn/api/graph_optimizer.py", line 1344, in rknn.api.graph_optimizer.GraphOptimizer.fuse_ops
E build:   File "rknn/api/graph_optimizer.py", line 627, in rknn.api.graph_optimizer.GraphOptimizer.fold_constant
E build:   File "rknn/api/session.py", line 28, in rknn.api.session.Session.__init__
E build:   File "rknn/api/session.py", line 71, in rknn.api.session.Session.sess_build
E build:   File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 335, in __init__
E build:     self._create_inference_session(providers, provider_options, disabled_optimizers)
E build:   File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session
E build:     sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
E build: onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (/image_encoder/backbone/stages.0/op_list.1/main/conv2/conv/Conv) Op (Conv) [ShapeInferenceError] Input tensor must have atleast 2 dimensions
build model failed.
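
The traceback shows the failure inside onnxruntime's constant folding (fold_constant), not in an RKNN-specific pass, so a common workaround is to fold constants and clean the graph with the third-party onnx-simplifier (pip install onnxsim) before conversion. A hedged sketch, not an official fix:

import onnx
from onnxsim import simplify

# Fold constants and collapse redundant Reshape/Unsqueeze/Slice chains,
# which often resolves ShapeInferenceError failures during rknn.build.
model = onnx.load('./weights/l0_encoder.onnx')
model_simp, check = simplify(model)
assert check, 'onnxsim equivalence check failed'
onnx.save(model_simp, './weights/l0_encoder_sim.onnx')

The simplified file can then be passed to rknn.load_onnx in place of the original.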

My current guess is that RK does not support operations such as Reshape, Unsqueeze, and Slice. I would like to confirm two questions with the official team:
1. Where can I get the list of operators supported by RK?
2. Large models have been optimized by the research community to the point where they can be deployed on edge devices (the Jetson series, etc.); how should they be deployed on the RK platform?

SoulProficiency · Feb 18 '24

I'd also like to know the operator list.

JokerJostar · Mar 14 '24

> I'd also like to know the operator list.

See the op support document in the doc directory.
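
To check a specific model against that document (its exact file name, e.g. an RKNNToolKit2_OP_Support markdown, varies by release), a small sketch that uses the onnx package and the encoder path from the original post to enumerate every op type the graph actually contains:

import onnx
from collections import Counter

# Count each ONNX op type in the encoder graph so every one can be
# looked up in the toolkit's op support document.
model = onnx.load('./weights/l0_encoder.onnx')
ops = Counter(node.op_type for node in model.graph.node)
for op, n in sorted(ops.items()):
    print('%-16s x %d' % (op, n))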

SoulProficiency · Mar 15 '24

> I'd also like to know the operator list.
>
> See the op support document in the doc directory.

OK, thanks.

JokerJostar · Mar 15 '24

Hello, I've run into a similar problem. Did the OP eventually solve the issue of porting the efficientVit-sam ONNX model to RKNN?

dbooning · Apr 28 '24