jenchun-potentialmotors
### Describe the issue

After quantization, the output ONNX model has faster inference speed and a smaller model size, but why are the input and output tensors still float32? I thought...
Label: quantization