MATTYGILO

Results 67 comments of MATTYGILO

Ok this seems to work:

```
model = MobileNetV3Large(
    input_shape=(224, 224, 3),
    include_top=True,
    weights=None,
    minimalistic=True,
```

Additionally, I have noticed that this also seems to work for MobileNetV3Small, even though the article earlier only mentions MobileNetV3Large.

Ok now I run this code:

```
import numpy as np
import tensorflow as tf

def representative_dataset():
    for _ in range(1000):
        data = np.random.choice([0, 255], size=(1, 224, 224, 1))
        yield...
```
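For context, this is the full-int8 converter setup I'd expect to wrap around a representative dataset like the one above. The tiny Conv2D model is just a stand-in for the real network; the converter flags are the standard TFLite post-training integer quantization settings:

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for the real network; the actual code
# would use the trained Keras model here instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),
    tf.keras.layers.Conv2D(2, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])

def representative_dataset():
    # Calibration samples must match the model's input signature and dtype.
    for _ in range(10):
        data = np.random.choice([0, 255], size=(1, 224, 224, 1)).astype(np.float32)
        yield [data]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# These three settings are what "full int8" means: int8 ops only,
# plus int8 tensors at the model boundary (required by TFLite Micro).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```

Without `inference_input_type`/`inference_output_type` set, the converter keeps float32 at the boundaries even when the interior is quantized, which TFLite Micro's int8 kernels reject.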

@thaink I have followed the guides. However, I'm using TFLite Micro, which requires full int8. None of the examples show what to do for full int...

@thaink I've already set those values. Are you suggesting I train on quantised data?

@thaink It's a YAMNet; I followed this Medium post: [https://medium.com/@antonyharfield/converting-the-yamnet-audio-detection-model-for-tensorflow-lite-inference-43d049bd357c](https://medium.com/@antonyharfield/converting-the-yamnet-audio-detection-model-for-tensorflow-lite-inference-43d049bd357c)

@thaink I've converted the model with full int8, but the model's output is complete rubbish. So I did QAT, which I have converted to full int...

@thaink What is the suggested way of doing full int8 QAT on a model?

@thaink This is how I QAT:

```
import tensorflow_model_optimization as tfmot

LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer

class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # List all of your weights
    weights = {
        "kernel": LastValueQuantizer(num_bits=8,...
```