David Fan

102 comments by David Fan

Is there any reason you want to set the batch size? The Keras model itself has batch size `None`, so we convert the ONNX model with an unknown batch...
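
For illustration, a minimal sketch of that conversion path; the toy model here is an assumption, and only `convert_keras` is the project's API. The first dimension of the converted input stays symbolic rather than being fixed to a batch size.

```python
# Minimal sketch (assumed toy model): convert a Keras model whose batch
# dimension is None and inspect the resulting ONNX input shape.
import keras2onnx
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(10, input_shape=(4,))])
onnx_model = keras2onnx.convert_keras(model, model.name)

# The batch dimension is kept symbolic (dim_param) instead of a fixed value.
print(onnx_model.graph.input[0].type.tensor_type.shape.dim[0])
```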

Using `quantize_model` causes the model input tensor to not be found in the TF graph. This needs further work to handle.

Please try the keras2onnx master branch directly to see if it works.

There is no active work on the sparkml converter. Contributions are welcome.

We have a CLI for the keras2onnx converter that users can use directly [here](https://github.com/onnx/keras-onnx/blob/eb3f02f9c1550a2e2ead0d637f2d8e698798637a/keras2onnx/cli.py). For other converters, contributions are welcome :)

You only need to specify `channel_first_inputs` in `convert_keras`; otherwise you don't need to specify it.
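
A minimal sketch of what that looks like, assuming a toy convolutional model; `channel_first_inputs` takes the names of the inputs that should be treated as channel-first (NCHW), and everything else here is illustrative.

```python
# Minimal sketch (assumed toy model): mark the model's input as channel-first
# by passing its name through channel_first_inputs.
import keras2onnx
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(8, 3, input_shape=(32, 32, 3)),
])
# In practice, pass the actual input tensor name(s) of your model.
onnx_model = keras2onnx.convert_keras(
    model,
    model.name,
    channel_first_inputs=[model.inputs[0].name],
)
```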

tf 2.4.1 is not supported; can you try tf 2.2.0?

You need to specify the function `customModel` through the `custom_conversion_functions` parameter of `convert_coreml`; otherwise the converter does not know what `customModel` is.
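
A minimal sketch of wiring that up, assuming a CoreML model with a custom layer named `customModel`; the converter body, the file name, and the dictionary key are illustrative assumptions, not the exact contract for that layer.

```python
# Minimal sketch (assumed names): register a conversion function for the
# custom CoreML layer so convert_coreml knows how to translate it.
import coremltools
import onnxmltools

def convert_custom_model(scope, operator, container):
    # Illustrative body: emit a pass-through Identity node for the custom layer.
    container.add_node(
        "Identity",
        operator.input_full_names,
        operator.output_full_names,
        name=scope.get_unique_operator_name("customModel"),
    )

coreml_model = coremltools.models.MLModel("model_with_custom_layer.mlmodel")
onnx_model = onnxmltools.convert_coreml(
    coreml_model,
    custom_conversion_functions={"customModel": convert_custom_model},
)
```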