Results 39 comments of Aaqib

This works for me, maybe it is useful for someone!

```
def create_quantization_model(model):
    layers = []
    for i in range(len(model.layers)):
        if isinstance(model.layers[i], tf.keras.models.Model):
            quant_sub_model = tf.keras.models.clone_model(model.layers[i], clone_function=apply_quantization)
            layers.append(tfmot.quantization.keras.quantize_apply(quant_sub_model))
            ...
```

Can you try creating your model without any quantization first? Then call: `q_model = tf.keras.models.clone_model(model, clone_function=apply_quantization)`, where `apply_quantization` should annotate every layer you want to quantize with `tfmot.quantization.keras.quantize_annotate_layer`.

I think quantization does not really recurse into models that contain other models (in your case, the main model contains other Sequential models). Did you try passing your model to...

Great. Check this to implement `call`: https://keras.io/guides/customizing_what_happens_in_fit/ There is a GAN example at the end of the page that would be useful.

Any updates on this to fully support a custom training loop and pruning via the `fit` function? @alanchiao does the example code you provided work for pruning a model with a custom training loop?

Thank you. For the time being, I am leaving this issue open.

Please submit a pull request with test cases.

Currently I am busy; I will look into this later to understand and verify the issue.