MNN
Operation Requirements
Known Op Requirements For TensorFlow
- AddN
- Equal (#42)
- SparseToDense (#42)
The TensorArray* ops come from the TensorFlow detection post-processing API and are not necessary to support, because DetectionPostProcess is supported.
- ~~TensorArrayGatherV3 (#462)~~
- ~~TensorArrayReadV3 (#462)~~
- ~~TensorArrayScatterV3 (#462)~~
- ~~TensorArraySizeV3 (#462)~~
- ~~TensorArrayV3 (#462)~~
- ~~TensorArrayWriteV3 (#462)~~
- ~~Pad (@rkshuai, @563816752 / #120)~~
- ~~FusedBatchNorm (@RobertAuditore)~~
- ~~ResizeBilinear+UInt8 (@WenguoLi)~~
- MirrorPad (@alcaster)
- OneHot (@alcaster)
- ~~FakeQuantWithMinMaxVars (#146 @liguiyuan)~~
- ~~FusedBatchNormV3 (@leo-nullptr)~~
- ~~LeakyRelu (@KanaSukita)~~
- ~~StrideSlice (@MaybeShewill-CV)~~
- ~~Iterator/IteratorGetNext (@JiangtianPan)~~
- ~~Merge (@jimdinunzio)~~
- ~~ResizeBicubic (@jimdinunzio)~~
- ~~Switch (@jimdinunzio)~~
- ~~FusedPadConv2D (@jimdinunzio)~~
- LogicalAnd (@JiangtianPan, @ChristineRYY)
- LogicalOr (@ChristineRYY)
- ~~flatten (@ChisenZhang)~~
- ~~GRU Dense (@liziru)~~
Known Op Requirements For TensorFlow Lite
- Mul (@valwang)
- Transpose (@Teragump)
- Fully_Connected (@Teragump)
- LogisticTflite (@GrayRui)
- AddV2 (@yinguobing)
- SUB (@yinguobing)
- RELU (@wikipedia2008)
- Iterator/IteratorGetNext (@JiangtianPan)
Known Op Requirements For ONNX
- ~~PRelu (#9)~~
- ~~LeakyReLU (@yizhaoyanbo, #144 @zhyj3038)~~
- Neg (@ChisenZhang)
- ~~Mul (@563816752)~~
- ~~ReLU6 (@563816752)~~
- ~~ELU (@MoonBunnyZZZ, @DHNicoles)~~
- ~~ReduceMean (@alcaster)~~
- ~~Slice (#144 @zhyj3038)~~
- ~~Sigmoid (#144 @zhyj3038)~~
- ~~ReduceSum (#144 @zhyj3038)~~
- ~~Split (@doodoo0006)~~
- Expand (@leo-nullptr)
- ~~MatMul (@leo-nullptr, #257 @BokyLiu, @liuwuliuyun)~~
- MaxRoiPool (@92ypli)
- ~~LSTM (@Cheneng)~~
- ~~Pad (@yizhaoyanbo)~~
- ~~Slice (@pfeatherstone)~~
- ~~Cast (@pfeatherstone)~~
- ~~ConvTranspose (@bobzhang123, @yizhaoyanbo)~~
Known Op Requirements For Caffe
- route (@RobertAuditore)
- power (@OOYueshenOO)
- Upsample (@cyf518)
If you have any op requirements, you can leave a comment to let us know, even if the op is already mentioned above. Please specify which framework the op comes from (TensorFlow, Caffe, or ONNX), like this:
Framework:
...
Not Supported OP:
...
power op
shuffle op
Constant op
Please specify which framework the op comes from (TensorFlow, Caffe, or ONNX), like this:
- Not Supported OP: ...
- Framework: ...
- Not Supported OP: pad, clip, LeakyReLU
- Framework: ONNX
- Not Supported OP: 18 Mul
Framework: TFLITE
Not Supported OP: shufflenet
Framework: pytorch or onnx
Not Supported OP: shufflenet
ShuffleNet is a network, not an op.
Sorry, I meant ShuffleChannel.
ShuffleChannel is NOT an op in PyTorch, ONNX, or TensorFlow! By the way, tensorflow-shufflenet has been tested; you can try it.
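For reference, ShuffleChannel is normally expressed with existing ops as a reshape, a transpose of the group dimensions, and a second reshape, which is why exporters do not need a dedicated op for it. Below is a minimal C++ sketch of that rearrangement; the function name and shapes are only illustrative.

```cpp
#include <vector>
#include <cstdio>

// Channel shuffle as used in ShuffleNet: view the C channels as (groups, C/groups),
// transpose to (C/groups, groups), then flatten back to C. Exporters emit
// Reshape + Transpose + Reshape for this, not a single ShuffleChannel op.
std::vector<float> channelShuffle(const std::vector<float>& in,
                                  int n, int c, int h, int w, int groups) {
    std::vector<float> out(in.size());
    const int cPerGroup = c / groups;   // assumes c % groups == 0
    const int spatial   = h * w;        // NCHW layout
    for (int b = 0; b < n; ++b) {
        for (int g = 0; g < groups; ++g) {
            for (int k = 0; k < cPerGroup; ++k) {
                // input channel  = g * cPerGroup + k
                // output channel = k * groups + g   (the "transpose" step)
                const float* src = in.data()  + (b * c + g * cPerGroup + k) * spatial;
                float*       dst = out.data() + (b * c + k * groups + g) * spatial;
                for (int s = 0; s < spatial; ++s) dst[s] = src[s];
            }
        }
    }
    return out;
}

int main() {
    // Tiny example: a 1x4x1x1 tensor with 2 groups; channels [0,1,2,3] become [0,2,1,3].
    std::vector<float> x = {0, 1, 2, 3};
    for (float v : channelShuffle(x, 1, 4, 1, 1, 2)) printf("%g ", v);
    printf("\n");
    return 0;
}
```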
Not Supported OP: Neg
Framework: ONNX
- Not Supported OP: pad
- Framework: tensorflow
Not Supported OP: Transpose, Fully_Connected
Framework: TensorFlow Lite
Framework: onnx
Not Supported OP: Transpose
Framework: Tensorflow
Not Supported OP: FusedBatchNorm
Framework: caffe
Not Supported OP: route (yolo2)
Op Requirements For ONNX: Mul, ReLU6
Framework: onnx
Not Supported OP: Transpose
Not Supported OP: FusedBatchNormV3
Framework: Tensorflow
Not Supported OP: ResizeBilinear (UInt8 quantized mode)
Framework: TensorFlow Lite
Error: Start to Convert Other Model Format To MNN Model... terminate called after throwing an instance of 'Error' what(): [16:44:02] /home/apuser/deeplearning/alibaba/MNN/tools/converter/source/tflite/ResizeBilinear.cpp:16: Check failed: !quantizedModel ==>
Framework: ONNX(pytorch)
Not Supported OP: ELU
Framework: onnx
Not Supported OP: Transpose
@92ypli @tanhui2975 @kunyao2015 Transpose for ONNX is supported in the latest commit.
@yizhaoyanbo @563816752 Pad and Clip for ONNX are supported in the latest commit.
Not Supported OP: ReduceMean
Framework: ONNX(pytorch)
Not Supported OP: MirrorPad, OneHot
Framework: Tensorflow
Framework: TFLITE
Not Supported OP: LogisticTflite
Error: ./tools/converter/source/tflite/LogisticTflite.cpp:15: Check failed: quantizedModel ==> LogisticTflite TODO(float)
Framework: Pytorch --> ONNX
Not Supported OP: Permute, dims >= 5
Framework: TensorFlow and TFLite (Object Detection API)
Not Supported OP: concat, when using OpenCL as the backend.
Specifically, we tested MobileNet-SSD using MNN OpenCL and got "The Creator Don't support type 10, concat".
What's more, when the OpenCL-based inference finishes, we get different results compared with CPU-based inference.
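For debugging mismatches like this, here is a minimal C++ sketch, assuming the standard MNN `Interpreter` API and a placeholder model path ("mobilenet_ssd.mnn"), that runs the same model on the CPU and OpenCL backends and prints the first few output values side by side:

```cpp
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>
#include <MNN/Tensor.hpp>
#include <cstdio>
#include <memory>

// Run one inference on the given backend and copy the first `count` output
// values into `out`. Input filling is omitted; in a real comparison the same
// input must be fed to both sessions.
static void runOn(MNN::Interpreter* net, MNNForwardType type, float* out, int count) {
    MNN::ScheduleConfig config;
    config.type = type;                                   // MNN_FORWARD_CPU or MNN_FORWARD_OPENCL
    auto session = net->createSession(config);
    // ... fill the input tensor here ...
    net->runSession(session);
    auto output = net->getSessionOutput(session, nullptr);
    MNN::Tensor host(output, output->getDimensionType()); // host copy is readable on any backend
    output->copyToHostTensor(&host);
    for (int i = 0; i < count; ++i) out[i] = host.host<float>()[i];
    net->releaseSession(session);
}

int main() {
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("mobilenet_ssd.mnn")); // placeholder path
    float cpu[8] = {0}, ocl[8] = {0};
    runOn(net.get(), MNN_FORWARD_CPU, cpu, 8);
    runOn(net.get(), MNN_FORWARD_OPENCL, ocl, 8);
    for (int i = 0; i < 8; ++i) {
        printf("%d: cpu=%f opencl=%f\n", i, cpu[i], ocl[i]);
    }
    return 0;
}
```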
Framework: onnx
Not Supported OP: Split
Framework: TensorFlow
Not Supported OP: ELU