onnx-tensorflow
PyTorch -> ONNX -> TF -> SNPE error for BatchNorm variance
I converted the MobileNet V1 SSD PyTorch model (https://github.com/qfgaohao/pytorch-ssd) to ONNX format, and from there to protobuf format using onnx-tf (TF == 1.15, ONNX == 1.3, SNPE == 1.36).
Now, when trying to convert the protobuf file to DLC format for SNPE, I get the following error:
ERROR - Conversion failed: Cannot resolve BatchNorm layer due to missing variance value.
Upon googling, I found this link https://developer.qualcomm.com/forum/qdn-forums/software/snapdragon-neural-processing-engine-sdk/35033 which says the BatchNorm variance input must be either a Const or an Identity op.
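Not from this thread, but a common workaround for exactly this constraint: fold the BatchNorm into the preceding convolution before (or after) export, so no separate variance tensor survives in the graph at all. The fold is pure arithmetic, a rescale of the conv weights and a shift of the bias. A minimal numpy sketch (function name and tensor layout are my own assumptions, not from the repo):

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta
    into a preceding conv with weights W (out_ch, in_ch, kh, kw)
    and bias b (out_ch,). Returns the folded (W, b)."""
    scale = gamma / np.sqrt(var + eps)          # per-output-channel rescale
    W_folded = W * scale[:, None, None, None]   # scale each output filter
    b_folded = (b - mean) * scale + beta        # shift the bias accordingly
    return W_folded, b_folded
```

After folding, the graph contains only a plain convolution, so the "variance must be Const or Identity" check never fires. Tools like onnxoptimizer / onnx-simplifier perform the equivalent fusion on the ONNX graph itself, provided the model was exported in eval mode so the BN running statistics are constants.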
Since I am a newbie in TF, could someone help me resolve this issue?
The command I use to convert the pb file to DLC format is:
snpe-tensorflow-to-dlc --input_network mb.pb --input_dim input.1 "1,3,300,300" --out_node 'boxes' --out_node 'scores' --output_path mb.dlc --allow_unconsumed_nodes
I had the same problem when converting even a simple CNN via PyTorch -> ONNX -> TF -> SNPE. My CNN is composed of just Conv2D, BatchNormalization, pooling, and activation layers. I then tried another route, PyTorch -> ONNX -> Caffe2 -> SNPE, and it worked fine. But now I need to convert another model, YOLOv5, to SNPE, and sadly some of its layers are not supported when converting ONNX -> Caffe2. So I have gone back to trying TF -> SNPE and still hit this error. I would really appreciate a solution.
PyTorch 1.6.0, ONNX 1.7.0, TensorFlow 1.15.0, SNPE 1.40.0.2130
Is there any update on this issue?
A couple of updates:
- TF 1.15 is no longer supported by onnx-tf.
- To debug, we need the code that leads to the error message, along with the ONNX file.