Taehee Jeong


Thanks for your interest in contributing! Please read the [contribution instructions](https://github.com/tensorflow/model-optimization/blob/master/CONTRIBUTING.md) for the next steps. As this looks like a whole new feature, you might also want to [file an RFC](https://github.com/tensorflow/model-optimization/blob/master/CONTRIBUTING_TECHNIQUE.md).

Hi @ejcgt, can you confirm whether this issue still happens on the latest version?

Hi @haozha111, are there any updates?

Hi @jennakwon06, can you give us an update?

Hi @Xhark, do you think this should be handled by the MOT team or the TF core team?

I'm not sure about the different first layer in 1.15 vs 2.4. As for the weight difference: is the left result from TF 2.4? Recent TFLite supports per-channel quantization for conv-like ops, so...
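As a minimal sketch of what "per-channel" means in practice, the converted model can be inspected for tensors that carry more than one scale. The path `model.tflite` is a placeholder, not a file from this issue; only standard `tf.lite.Interpreter` calls are used.

```python
# Sketch: list tensors whose quantization parameters carry per-channel scales.
# "model.tflite" is a placeholder path for whichever converted model you inspect.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    scales = detail["quantization_parameters"]["scales"]
    # More than one scale on a tensor indicates per-channel quantization,
    # which is what recent TFLite applies to conv-like weights.
    if len(scales) > 1:
        print(detail["name"], "-> per-channel, num scales =", len(scales))
```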

Hi @arm-ssingh, I can't reproduce the colab since I don't have the input file. In the meantime, can you confirm that the model is 16x8 quantized? We need to inspect the model, but...
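If it helps, here is a rough sketch (assuming a converted model at the placeholder path `model.tflite`) of how to check whether a model looks 16x8-quantized, i.e. int16 activations with int8 weights, using only the standard interpreter APIs:

```python
# Sketch: check tensor dtypes for the 16x8 pattern (int16 activations, int8 weights).
# "model.tflite" is a placeholder path.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# 16x8 models typically expose int16 inputs/outputs,
# unless float32 I/O was kept at conversion time.
for d in interpreter.get_input_details() + interpreter.get_output_details():
    print(d["name"], d["dtype"])

# Count dtypes across the graph; a 16x8 model should show both int16 and int8 tensors.
dtypes = [d["dtype"] for d in interpreter.get_tensor_details()]
print("int16 tensors:", sum(dt == np.int16 for dt in dtypes))
print("int8 tensors:", sum(dt == np.int8 for dt in dtypes))
```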

Hi @lovodkin93, can you share a colab so we can reproduce this easily? Also, please let us know which TF and TF-MOT versions you're using.
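For reporting the versions, something like the snippet below should be enough; it assumes both packages are installed and expose `__version__` (true for recent releases).

```python
# Sketch: print the TF and TF-MOT versions to include in the issue report.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

print("TF:", tf.__version__)
print("TF-MOT:", tfmot.__version__)
```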

Hi @Xhark, can you comment on whether nested models are supported now?
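For context, "nested model" here means a Keras model used as a layer inside another Keras model. A minimal sketch of that pattern, with a hedged call to `tfmot.quantization.keras.quantize_model` (whether it accepts the nested structure is exactly the open question; older TF-MOT versions rejected sub-models):

```python
# Sketch: the "nested model" pattern the question refers to.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

inner = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
])
outer = tf.keras.Sequential([
    inner,                      # a model nested inside another model
    tf.keras.layers.Dense(1),
])

try:
    quantized = tfmot.quantization.keras.quantize_model(outer)
    quantized.summary()
except Exception as e:  # older TF-MOT versions reject nested sub-models
    print("Nested model not supported:", e)
```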

Hi @liyunlu0618, can you check the current status?