MNN2QNNModel crashes with "Segmentation fault (core dumped)"
I have a static ONNX model. After converting it with MNNConvert and then running MNN2QNNModel to produce the QNN artifacts, the tool crashes:

param dims: 1x3x32x900
param dims: 1x3x32x900
Total input shape type size:1
[Temp Product]: Qnn temp product generate at /.../.../...
CPU Group: [ 17 18 16 19 ], 800000 - 3600000
CPU Group: [ 0 3 5 7 15 1 12 14 2 4 6 13 ], 800000 - 4800000
CPU Group: [ 9 10 11 8 ], 800000 - 4900000
The device supports: i8sdot:0, fp16:0, i8mm: 0, sve2: 0, sme2: 0
Can't Find type=5 backend, use 0 instead
Load Cache file error.
input 0 shape:1 3 32 900
Segmentation fault (core dumped)
My MNN version is 3.3.0 and my QNN version is 2.40.
As a first workaround, edit line 258 of MNN2QNNModel.cpp and change `config.type = MNN_FORWARD_NN;` to `config.type = MNN_CONVERT_QNN;`. We will fix the MNN2QNNModel conversion tool in a follow-up.
After making that change, MNN2QNNModel still fails:

[Pass]: qnn-model-lib-generator success!
qnn-context-binary-generator pid:79124
<W> Initializing HtpProvider
     0.0ms [ ERROR ] Dimension at index 1 cannot be broadcast between in[0] (192) and in[1] (225).
     0.0ms [ ERROR ] Dimension at index 1 cannot be broadcast between in[0] (192) and in[1] (225).
     0.0ms [ ERROR ] Dimension at index 1 cannot be broadcast between in[0] (192) and in[1] (225).
     0.0ms [ ERROR ] Dimension at index 1 cannot be broadcast between in[0] (192) and in[1] (225).
     0.0ms [ ERROR ] Op specific validation failed.
   189.6ms [ ERROR ] <E> validateNativeOps master op validator BinaryOp_I_9_10_O_12:qti.aisw:ElementWiseAdd failed 3110
   189.6ms [ ERROR ] <E> QnnBackend_validateOpConfig failed 3110
   189.6ms [ ERROR ] <E> Failed to validate op BinaryOp_I_9_10_O_12 with error 0xc26
[ ERROR ] QnnModel::addNode() validating node BinaryOp_I_9_10_O_12 failed.
[ ERROR ] all_in_one_rec_quant_900_0.addNode(QNN_OPCONFIG_VERSION_1, "BinaryOp_I_9_10_O_12", "qti.aisw", "ElementWiseAdd", params_BinaryOp_I_9_10_O_12, 0, inputs_BinaryOp_I_9_10_O_12, 2, outputs_BinaryOp_I_9_10_O_12, 1) expected MODEL_NO_ERROR, got MODEL_GRAPH_OP_VALIDATION_ERROR
   189.6ms [ ERROR ] Failed in composeGraphs()
   189.7ms [ ERROR ] ComposeGraphs Failed with error = 1
Graph Compose failure
<W> Backend 1 free cleanup called during process exit
[Error]: qnn-context-binary-generator error!
So qnn-model-lib-generator succeeds, but qnn-context-binary-generator fails.
Could you share the model so we can reproduce this?
Unfortunately that is not possible. One more question: if I convert a static model with MNNConvert --saveStaticModel true --inputConfigFile /config.txt and then quantize it with quantized.out, using the following JSON config:

{
    "format": "BGR",
    "mean": [0.5, 0.5, 0.5],
    "normal": [0.5, 0.5, 0.5],
    "width": 900,
    "height": 32,
    "path": "/imgs/",
    "feature_quantize_method": "KL",
    "weight_quantize_method": "ADMM",
    "input_type": "image"
}
it also segfaults:

CPU Group: [ 17 18 16 19 ], 800000 - 3600000
CPU Group: [ 0 3 5 7 15 1 12 14 2 4 6 13 ], 800000 - 4800000
CPU Group: [ 9 10 11 8 ], 800000 - 4900000
The device supports: i8sdot:0, fp16:0, i8mm: 0, sve2: 0, sme2: 0
[11:07:06] /home/host/MNN-master/tools/quantization/Helper.cpp:137: used dataset num: 1000
[11:07:07] /home/host/MNN-master/tools/quantization/calibration.cpp:1053: fake quant weights done.
Segmentation fault (core dumped)
What could be causing this?
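For reference, this is how I drive the quantizer: the config above is written to a file and sanity-checked before invoking the tool (the model file names below are placeholders, and the quantized.out argument order is my understanding of the tool's usage, so adjust as needed):

```shell
# Save the quantization config from this thread to a file.
cat > quant_config.json <<'EOF'
{
  "format": "BGR",
  "mean": [0.5, 0.5, 0.5],
  "normal": [0.5, 0.5, 0.5],
  "width": 900,
  "height": 32,
  "path": "/imgs/",
  "feature_quantize_method": "KL",
  "weight_quantize_method": "ADMM",
  "input_type": "image"
}
EOF

# Check the config is well-formed JSON before handing it to the tool.
python3 -m json.tool quant_config.json > /dev/null && echo "config OK"

# Then run the quantizer (placeholder model names):
# ./quantized.out static_model.mnn static_model_quant.mnn quant_config.json
```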
Can you capture a stack trace of the crash?
No, the "Segmentation fault" line is the only output.
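Even when only that line is printed, a backtrace can usually be recovered from a core dump with gdb. A rough sketch (the quantized.out name is from this thread; the reproducing arguments and core file location are placeholders that depend on your setup):

```shell
# 1. Allow core dumps in this shell (a limit of 0 suppresses them).
ulimit -c unlimited
ulimit -c   # should now report: unlimited

# 2. Re-run the crashing command so a core file gets written
#    (uncomment and fill in the arguments that reproduce the crash):
# ./quantized.out static_model.mnn static_model_quant.mnn quant_config.json

# 3. Load the core into gdb and print the stack. Where the core file
#    lands depends on /proc/sys/kernel/core_pattern; on systemd systems
#    it may be captured instead, in which case use `coredumpctl gdb`.
# gdb ./quantized.out core -batch -ex "bt"
```

Building MNN with debug symbols (CMAKE_BUILD_TYPE=Debug) makes the resulting backtrace far more useful.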
The lib generated for some op in the middle of the model is likely malformed; pinpointing which one will require the model itself.