Deyu Huang

51 comments by Deyu Huang

@masahi sorry for the late reply. The reason why [`FakeQuantWithMinMaxArgs`](https://tensorflow.google.cn/versions/r1.15/api_docs/python/tf/quantization/fake_quant_with_min_max_vars) only supports uint8 is the quantization range it maps into: ``` inputs values are quantized into the quantization range ([0; 2^num_bits...
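To illustrate the point: with `num_bits=8` the fake-quant range covers `[0, 2^8 - 1] = [0, 255]`, which is exactly the uint8 range. A minimal NumPy sketch of that quantize/dequantize round trip (an illustration of the math, not the TF kernel itself):

```python
import numpy as np

def fake_quant(x, q_min, q_max, num_bits=8):
    # Quantization range is [0, 2^num_bits - 1], i.e. [0, 255] for 8 bits,
    # which is why the op is tied to uint8.
    levels = 2 ** num_bits - 1
    scale = (q_max - q_min) / levels
    # Snap to the integer grid, clamp to the uint8 range, then dequantize.
    q = np.clip(np.round((x - q_min) / scale), 0, levels)
    return q * scale + q_min

x = np.array([-1.0, 0.0, 0.5, 1.0, 7.0])
print(fake_quant(x, q_min=0.0, q_max=6.0))
```

Values below `q_min` clamp to 0.0 and values above `q_max` clamp to 6.0; in-range values land on the nearest of the 256 grid points.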

Could you also provide a simple Python repro script? This issue is related to this line: [tensor.py#L785 StridedSlice](https://github.com/onnx/tensorflow-onnx/blob/5cd3b5b87ca8fbe38e90d2ecce6f9bda891792bf/tf2onnx/onnx_opset/tensor.py#L785).

This was a gemm_rewriter that rewrites the `MatMul + Add` ops into a `Gemm` op: https://github.com/onnx/tensorflow-onnx/blob/master/tf2onnx/rewriter/gemm_rewriter.py. You can just remove the related code from this PR [add gemm_rewriter and the corresponding...
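For context on why the rewrite is valid: ONNX `Gemm` computes `Y = alpha * A * B + beta * C`, so a `MatMul` followed by a bias `Add` is simply a `Gemm` with `alpha = beta = 1`. A hedged NumPy check of that equivalence (illustrative only, not the rewriter's code):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 3)).astype(np.float32)
b = rng.standard_normal((3, 4)).astype(np.float32)
bias = rng.standard_normal((4,)).astype(np.float32)

# The pattern as the TF graph expresses it: MatMul followed by Add.
matmul_add = a @ b + bias

# ONNX Gemm semantics: Y = alpha * A * B + beta * C, here with alpha = beta = 1.
gemm = 1.0 * (a @ b) + 1.0 * bias

assert np.allclose(matmul_add, gemm)
print("MatMul+Add matches Gemm")
```

Fusing the two nodes into one `Gemm` lets runtimes dispatch a single optimized kernel instead of two separate ops.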

Someone posted a request to the ONNX community: https://github.com/onnx/onnx/issues/4057.

Closed, because this issue was fixed by changing the original model and is being tracked in the ONNX community.

ONNX doesn't currently support ragged tensors. The converter can convert rank-1 ragged tensors but can't yet handle ranks higher than 1. We could potentially support higher ranks with a loop, but...
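One reason the rank-1 case is tractable: a rank-1 ragged tensor is fully described by a flat values array plus row-split offsets, both of which are ordinary dense tensors that ONNX ops can slice. A hedged sketch of that decomposition (illustrative naming, not the converter's code):

```python
import numpy as np

# The rank-1 ragged tensor [[1, 2], [], [3, 4, 5]] decomposes into a flat
# values array plus row_splits offsets -- two plain dense tensors.
values = np.array([1, 2, 3, 4, 5])
row_splits = np.array([0, 2, 2, 5])

# Reassemble the rows by slicing values between consecutive offsets.
rows = [values[row_splits[i]:row_splits[i + 1]] for i in range(len(row_splits) - 1)]
print([r.tolist() for r in rows])  # → [[1, 2], [], [3, 4, 5]]
```

Higher-rank ragged tensors nest this scheme (row splits of row splits), which is where a loop-based conversion would come in.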

I can't find a `Conv2d` op with the same shape in the model graph you provided. Could you also check the model and send the repro conversion code?

> > I don't find the same shape `Conv2d` op with the model graph you provide. Could you also check the model and send the repro conversion code?
>
> ...

It seems, from the PR commit [fix operator for fakequantize](https://github.com/onnx/tensorflow-onnx/commit/e3bc51c688e6e73b575b9bb127efee3fb32cd7df#diff-3c122aaf1f537d28d174a17c7f55671aba381a200d4e5d3736329ad29df222deR32), that the output type constraint could be int8 per the ONNX spec [quantizelinear-13](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#quantizelinear-13). We need to do more investigation. Hi @xadupre, could you please...
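For reference, QuantizeLinear-13 defines `y = saturate(round(x / y_scale) + y_zero_point)`, and with an int8 zero point the saturation range is `[-128, 127]` rather than uint8's `[0, 255]`. A hedged NumPy sketch of that formula (not a runtime kernel):

```python
import numpy as np

def quantize_linear_int8(x, scale, zero_point):
    # QuantizeLinear-13: y = saturate(round(x / scale) + zero_point),
    # saturating to the int8 range [-128, 127].
    y = np.rint(x / scale) + zero_point
    return np.clip(y, -128, 127).astype(np.int8)

x = np.array([-1.5, 0.0, 0.6, 200.0], dtype=np.float32)
print(quantize_linear_int8(x, scale=0.5, zero_point=0))
```

The only difference from the uint8 path is the saturation bounds (and the zero-point dtype), which is why supporting int8 output is mostly a type-constraint question.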