Pulkit Bhuwalka
@peri044 - The code is working for me. Could you please try the code again using `tf-nightly`? Also consider using @joyalbin's snippet, though for `saved_model`, it...
Closing this for now since I was able to run the code without any issues, and it's likely a versioning issue. Please feel free to reopen otherwise.
Thanks @thecosta. We are investigating this.
Hi @CRosero, we haven't added support for quantizing Keras models within models yet. This is possible, and something we intend to do in the future. In the meantime, @kmkolasinski is...
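For anyone hitting this in the meantime, a minimal sketch of one possible workaround: define the architecture as a single flat model instead of nesting one model inside another (the layer sizes below are made up for illustration, not taken from the original colab):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Nested form (not yet supported by quantize_model):
#   inner = tf.keras.Sequential([...])
#   outer = tf.keras.Sequential([inner, ...])

# Workaround sketch: build the same architecture as one flat model,
# which quantize_model can handle today.
flat_model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(16,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
q_aware_model = tfmot.quantization.keras.quantize_model(flat_model)
```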
Hi @Kyle719, I tried reproducing this, but I didn't see any errors. It converted just fine. Please make sure you use `tf-nightly`. [This](https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/python/core/quantization/keras/utils.py) should explain how the conversion is done.
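For reference, a rough sketch of what the conversion boils down to (assuming `q_aware_model` is the quantization-aware Keras model after fine-tuning; the output filename is arbitrary):

```python
import tensorflow as tf

# Convert the quantization-aware Keras model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open('quantized_model.tflite', 'wb') as f:
    f.write(tflite_model)
```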
Will update it once we add support for it.
@CRosero - I fixed the code in your colab. Your Sequential model was not constructed correctly - it was missing parentheses, so it doesn't actually have any layers. That's...
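The colab itself isn't reproduced here, but the kind of mistake described looks roughly like this (the layers are placeholders):

```python
import tensorflow as tf

# Broken: the layer classes are passed without parentheses, so they are
# never instantiated and the model ends up without usable layers.
# model = tf.keras.Sequential([
#     tf.keras.layers.Flatten,
#     tf.keras.layers.Dense,
# ])

# Fixed: each layer is instantiated with its arguments.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```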
Thanks @Xhark. Seems to me the last line `q_aware_model = quantize_model(q_base_model)` is not needed. `q_base_model` is already quantized, right?
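The original snippet isn't reproduced here, but for comparison, the usual flow calls `quantize_model` exactly once, and its output is already the quantization-aware model used for fine-tuning (the model below is a placeholder):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# quantize_model is called once; q_aware_model is already quantization-aware.
base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(16,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
q_aware_model = tfmot.quantization.keras.quantize_model(base_model)
q_aware_model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
```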
It seems to me we are mixing a few issues. I want to make sure I understand the problem correctly. Please correct me if I'm wrong. [Issue](https://github.com/tensorflow/model-optimization/issues/40) is about handling...