Thai Nguyen
Hi Matty, passing `converter.representative_dataset = representative_dataset` is only required for post-training quantization. If you want to use QAT instead, follow the guide at https://www.tensorflow.org/model_optimization/guide/quantization/training_example (apply `quantize_model` before training, then train the quantized model).
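To illustrate the post-training path, here is a minimal sketch of a `representative_dataset` generator. The calibration samples and input shape are made up for illustration; in practice you would yield a few hundred real examples from your training data. The converter lines are shown as comments since they require TensorFlow:

```python
import numpy as np

# Hypothetical calibration data: 8 samples with an assumed (1, 224, 224, 3)
# input shape. Real code should draw these from the training set.
calibration_samples = [
    np.random.rand(1, 224, 224, 3).astype(np.float32) for _ in range(8)
]

def representative_dataset():
    """Generator the TFLite converter calls during post-training
    quantization to calibrate activation ranges (not needed for QAT)."""
    for sample in calibration_samples:
        # The converter expects a list with one array per model input.
        yield [sample]

# Post-training quantization sketch (requires TensorFlow):
# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# converter.representative_dataset = representative_dataset
# tflite_model = converter.convert()
```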
`inference_input_type` and `inference_output_type` are actually for making the converted model accept int8 inputs and produce int8 outputs.
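As a sketch of what int8 input/output means for the caller: floats are mapped to int8 with the affine scheme `q = round(x / scale) + zero_point` before invoking the model, and mapped back afterwards. The scale and zero-point values below are made-up examples; the real ones come from the converted model's tensor details (e.g. `interpreter.get_input_details()`):

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Map float32 values to int8 with affine quantization, clamped to
    the int8 range, as TFLite expects for int8 model inputs."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from an int8 model output."""
    return (q.astype(np.float32) - zero_point) * scale

# Assumed example parameters for illustration only.
scale, zero_point = 0.5, 10
x = np.array([0.0, 1.0, -2.0], dtype=np.float32)
q = quantize(x, scale, zero_point)          # int8 values fed to the model
x_back = dequantize(q, scale, zero_point)   # floats recovered by the caller

# Converter settings that enable this (sketch; requires TensorFlow):
# converter.inference_input_type = tf.int8
# converter.inference_output_type = tf.int8
```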
Can you share or describe what your output model looks like?
@Xhark Could you check whether Matt is applying QAT the right way?
@Xhark Could you take a look?
@Xhark could you take a look at this issue?
@rino20 Could you take a look?
Aren't the original model weights in the checkpoint files or the `variables/` directory?
I don't think we have an official way to do that. Use with TensorRT isn't among our supported use cases for now.
Jaehong is more familiar with QAT and Keras-related things. @Xhark Could you take a look?