onnx2pytorch
A lot of Ops with their implementations.
Hi, developer team of onnx2pytorch. I am currently developing a neural network quantization framework: https://github.com/openppl-public/ppq/tree/master/ppq. The really interesting part is that we both need to run ONNX models with PyTorch : ) I am glad to share our operator implementations with you: https://github.com/openppl-public/ppq/blob/master/ppq/executor/op/torch/default.py
We currently support the following ONNX operators (still a work in progress); a minimal sketch of how these forward functions are wired up as a dispatch table follows the list:
- 'Abs': Abs_forward,
- 'AdaptiveAvgPool2d': AdaptiveAvgPool2d_forward,
- 'And': And_forward,
- 'Add': Add_forward,
- 'ArgMax': ArgMax_forward,
- 'AveragePool': AveragePool_forward,
- 'BatchNormalization': BatchNormalization_forward,
- 'Cast': Cast_forward,
- 'Clip': Clip_forward,
- 'Concat': Concat_forward,
- 'Constant': Constant_forward,
- 'ConstantOfShape': ConstantOfShape_forward,
- 'Conv': Conv_forward,
- 'ConvTranspose': ConvTranspose_forward,
- 'Cos': Cos_forward,
- 'Div': Eltwise_forward,
- 'Equal': Equal_forward,
- 'Exp': UnaryEltwise_forward,
- 'Expand': Expand_forward,
- 'Flatten': Flatten_forward,
- 'Gather': Gather_forward,
- 'GatherElements': Gather_forward,
- 'GatherND': GatherND_forward,
- 'Gelu': Gelu_forward,
- 'Gemm': Gemm_forward,
- 'grid_sampler': Grid_sampler_forward,
- 'GlobalAveragePool': AveragePool_forward,
- 'GlobalMaxPool': MaxPool2d_forward,
- 'Greater': Greater_forward,
- 'LayerNorm': LayerNorm_forward,
- 'LeakyRelu': LeakyRelu_forward,
- 'Less': Less_forward,
- 'LogSoftmax': LogSoftmax_forward,
- 'MatMul': MatMul_forward,
- 'Max': Eltwise_forward,
- 'MaxPool': MaxPool2d_forward,
- 'Min': Eltwise_forward,
- 'Mul': Mul_forward,
- 'MultiHeadAttention': MultiHeadAttention_forward,
- 'NonMaxSuppression': _NMS_forward,
- 'NonZero': NonZero_forward,
- 'Not': Not_forward,
- 'Pad': Pad_forward,
- 'PRelu': PRelu_forward,
- 'Range': Range_forward,
- 'ReduceL2': ReduceL2_forward,
- 'ReduceMax': ReduceMax_forward,
- 'ReduceMean': ReduceMean_forward,
- 'ReduceSum': ReduceSum_forward,
- 'Relu': UnaryEltwise_forward,
- 'Reshape': Reshape_forward,
- 'Resize': Resize_forward,
- 'ScatterElements': ScatterElements_forward,
- 'ScatterND': ScatterND_forward,
- 'Shape': Shape_forward,
- 'Sigmoid': UnaryEltwise_forward,
- 'Sin': Sin_forward,
- 'Slice': Slice_forward,
- 'Softmax': Softmax_forward,
- 'Softplus': Softplus_forward,
- 'Split': Split_forward,
- 'Squeeze': Squeeze_forward,
- 'Sub': Eltwise_forward,
- 'Tile': Tile_forward,
- 'TopK': TopK_forward,
- 'Transpose': Transpose_forward,
- 'Unsqueeze': Unsqueeze_forward,
- 'Where': Where_forward,
- 'Sqrt': Sqrt_forward,
- 'Log': Log_forward,
- 'Floor': Floor_forward,
- 'RoiAlign': RoiAlign_forward,
- 'MMCVRoiAlign': MMCVRoiAlign_forward,
- 'SpaceToDepth': SpaceToDepth_forward,
- 'DepthToSpace': DepthToSpace_forward,
- 'Scale': Scale_forward, # caffe op
- 'Tanh': Tanh_forward,
- 'Pow': Pow_forward,
- 'Crop': Crop_forward, # caffe op
- 'ChannelShuffle': ChannelShuffle_forward, # caffe op
- 'InstanceNormalization': InstanceNormalization_forward,
- 'Parameter': Parameter_forward, # caffe op
- 'Interp': Interp_forward, # caffe op
- 'CaffeArgMax': CaffeArgMax_forward, # caffe op
- 'HardSigmoid': HardSigmoid_forward,
- 'HardSwish': HardSwish_forward,
- 'Neg': Neg_forward,
- 'GRU': GRU_forward,
- 'PPQDeviceSwitch': PPQDeviceSwitch_forward,
- 'Identity': Identity_forward,
- 'OneHot': Onehot_forward,
- 'Reciprocal': Reciprocal_forward,
- 'LSTM': LSTM_forward,
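
For context, here is a minimal sketch of the dispatch-table pattern behind the list above: each ONNX op type maps to a small PyTorch forward function, and a graph is executed node by node against a dict of named tensors. The function and variable names below are illustrative only, not the actual PPQ or onnx2pytorch API.

```python
import torch

def Relu_forward(x):
    return torch.relu(x)

def Add_forward(a, b):
    return a + b

def Gemm_forward(x, w, b):
    # ONNX Gemm with default attributes (alpha=1, beta=1, no transpose).
    return x @ w + b

# Dispatch table: ONNX op_type -> PyTorch implementation.
DEFAULT_DISPATCH = {
    'Relu': Relu_forward,
    'Add': Add_forward,
    'Gemm': Gemm_forward,
}

def run_graph(nodes, values, dispatch=DEFAULT_DISPATCH):
    """Execute a topologically sorted list of nodes.

    nodes  : list of (op_type, input_names, output_names) tuples
    values : dict mapping tensor names to torch.Tensor (graph inputs + initializers)
    """
    for op_type, inputs, outputs in nodes:
        fn = dispatch.get(op_type)
        if fn is None:
            raise NotImplementedError(f'No implementation for op {op_type}')
        # Look up the node's input tensors by name and store its output by name.
        values[outputs[0]] = fn(*(values[name] for name in inputs))
    return values

if __name__ == '__main__':
    # Tiny two-node example: Gemm -> Relu.
    values = {
        'x': torch.randn(1, 4),
        'w': torch.randn(4, 3),
        'b': torch.zeros(3),
    }
    nodes = [
        ('Gemm', ['x', 'w', 'b'], ['h']),
        ('Relu', ['h'], ['y']),
    ]
    out = run_graph(nodes, values)
    print(out['y'].shape)  # torch.Size([1, 3])
```

In the real implementations the forward functions also parse ONNX node attributes and handle broadcasting and dtype details, but the overall structure is this same op_type-to-function mapping.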