Wenbing Li

43 comments by Wenbing Li

Is the torch.onnx documentation enough to answer your question? https://pytorch.org/docs/stable/onnx.html
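
For reference, exporting a PyTorch model to ONNX usually comes down to a single `torch.onnx.export` call. The model, input shape, and file name below are placeholders, so adapt them to your own case:

```python
import torch
import torch.nn as nn

# Placeholder model and dummy input; replace with your own module.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
dummy_input = torch.randn(1, 4)

# torch.onnx.export traces the model with the dummy input and writes an ONNX file.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```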

Since Keras exposes the output shape, it's doable to calculate the actual padding size. One reminder: onnxmltools needs to generate ONNX models for multiple versions, e.g. any...
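
As an illustration of that idea (a rough sketch, not the converter's actual code), explicit SAME-style padding for one spatial dimension can be recovered from the input size, kernel, and stride that Keras reports:

```python
import math

def same_padding_1d(in_size, kernel, stride, dilation=1):
    """Compute explicit (begin, end) padding for one spatial dimension
    so the output size matches the SAME padding rule."""
    out_size = math.ceil(in_size / stride)
    effective_kernel = (kernel - 1) * dilation + 1
    pad_total = max((out_size - 1) * stride + effective_kernel - in_size, 0)
    # TensorFlow/Keras put the extra pixel of odd padding at the end.
    pad_begin = pad_total // 2
    pad_end = pad_total - pad_begin
    return pad_begin, pad_end

# Example: a 3-wide kernel with stride 2 over a 224-wide input needs (0, 1) padding.
print(same_padding_1d(224, 3, 2))
```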

Yes, it's a known issue that tf 2.x subclassed layers don't support custom conversion; only TensorFlow ops can be converted in custom mode.

With tf 2.x, set_converter only supports customizing a tf op; with tf 1.x, set_converter works on both tf ops and Keras layers. Is the model source above the one where the issue happens?

No plan for that; contributions are welcome.

This is the API: keras2onnx.set_converter(<layer class>, <converter function>). You can use this as a reference: https://github.com/onnx/keras-onnx/tree/c4efae793dabd301bac23adea2230d5fe30482c7/keras2onnx/ke2onnx, which has a lot of examples of how each layer is converted.
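
As a rough sketch of how a registration might look (the layer and converter names here are illustrative; follow the ke2onnx examples linked above for the exact converter contract):

```python
import keras2onnx
from tensorflow.keras.layers import Layer

class MyCustomLayer(Layer):
    # Placeholder custom layer; a real one would carry its own weights/logic.
    def call(self, inputs):
        return inputs

def convert_my_custom_layer(scope, operator, container):
    # Emit the ONNX node(s) that reproduce the layer's computation.
    # This placeholder layer is an identity, so one Identity node is enough.
    container.add_node(
        "Identity",
        operator.input_full_names,
        operator.output_full_names,
        name=scope.get_unique_operator_name("MyCustomLayerIdentity"),
    )

# Register the converter so keras2onnx uses it whenever it meets MyCustomLayer.
keras2onnx.set_converter(MyCustomLayer, convert_my_custom_layer)
```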

@jiafatom, if the CRF layer looks reasonably common, we should add a converter for this layer as well, like the demo code in the application folder.

Yes, the Keras converter needs to work with tensorflow.compat.v1.disable_tensor_equality(); you can re-enable it afterwards if it is bothering you.
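
A minimal sketch of that pattern, assuming a standard keras2onnx.convert_keras call and a placeholder Keras model:

```python
import tensorflow as tf
import keras2onnx

# keras2onnx relies on tf 1.x-style tensor hashing, so turn tensor
# equality off for the duration of the conversion.
tf.compat.v1.disable_tensor_equality()

model = tf.keras.applications.MobileNetV2()  # placeholder model
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "model.onnx")

# Restore the default tf 2.x behaviour once conversion is done.
tf.compat.v1.enable_tensor_equality()
```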

@Gaploid, the issue is caused by the lack of float16 support in the onnxruntime Python API: https://github.com/Microsoft/onnxruntime/blob/master/onnxruntime/python/onnxruntime_pybind_mlvalue.cc

If you already have a TensorFlow model, why not try the tensorflow-onnx converter? https://github.com/onnx/tensorflow-onnx
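
For a SavedModel, the tf2onnx command line (`python -m tf2onnx.convert --saved-model <dir> --output model.onnx`) is usually the quickest route. There is also a Python API in recent tf2onnx releases; a rough sketch for a Keras model (the model, input spec, and opset below are placeholders):

```python
import tensorflow as tf
import tf2onnx

# Placeholder Keras model; swap in your own.
model = tf.keras.applications.MobileNetV2()

# Convert directly from the in-memory Keras model to an ONNX file.
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="mobilenetv2.onnx"
)
```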