Pulkit Bhuwalka

19 comments by Pulkit Bhuwalka

Sure. I'll take a look.

Hi @madarax64, I haven't started working on this yet. We're planning another release in the next 1-2 months, and this should be part of it.

You're welcome. I'm just reopening it to make sure it stays open until I actually fix it, and so I don't forget :)

Hi @marno1d, I just pushed a new release yesterday which has support for `SeparableConv1D`. Full support for `Conv1D` is still pending, but you can work around it by...

https://github.com/tensorflow/model-optimization/blob/48c08d13629ff062ce1720d53a035bbfa0331b83/tensorflow_model_optimization/python/core/quantization/keras/default_8bit/quantize_numerical_test.py#L112 https://github.com/tensorflow/model-optimization/blob/48c08d13629ff062ce1720d53a035bbfa0331b83/tensorflow_model_optimization/python/core/quantization/keras/default_8bit/quantize_numerical_test.py#L135

Adding standalone support for `Conv1D` should be fairly simple. `Conv1D` internally uses the `Conv2D` op, which is already supported through both training and conversion. You just need to write a test...

@jennakwon06 - Yes. For Conv1D, it should be the same as the Conv2D option. Internally there's just a conv2d op, no conv1d op.
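To make the point above concrete, here is a small numpy sketch (not the TF implementation itself) showing why a 1-D convolution reduces to a 2-D convolution: insert a height-1 axis into the input and kernel, run the 2-D op, and the results match. The `conv1d`/`conv2d` helpers are naive illustrative implementations, not library functions.

```python
import numpy as np

def conv1d(x, w):
    """Naive 'valid' 1-D convolution. x: (length, in_ch), w: (k, in_ch, out_ch)."""
    k = w.shape[0]
    out = np.zeros((x.shape[0] - k + 1, w.shape[2]))
    for i in range(out.shape[0]):
        # Sum over the kernel window and input channels.
        out[i] = np.einsum('ki,kio->o', x[i:i + k], w)
    return out

def conv2d(x, w):
    """Naive 'valid' 2-D convolution. x: (H, W, in_ch), w: (kh, kw, in_ch, out_ch)."""
    kh, kw = w.shape[:2]
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1, w.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.einsum('hwi,hwio->o', x[i:i + kh, j:j + kw], w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 3))    # length 10, 3 input channels
w = rng.standard_normal((4, 3, 5))  # kernel 4, 3 in, 5 out

y1 = conv1d(x, w)
# Add a height-1 axis to input and kernel: the 1-D conv is a 2-D conv.
y2 = conv2d(x[None, :, :], w[None, :, :, :])[0]
assert np.allclose(y1, y2)
```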

No, it won't be able to. You'll have to write custom transforms for that, similar to the Conv2D BatchNorm transforms. And you would have to ensure that the converter is...

Yes, you can use the Conv2D one as a guide. Perhaps even start by modifying it; it might just work with that. Thanks!
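For reference, the math that a Conv+BatchNorm folding transform implements is straightforward: scale the conv weights per output channel and absorb the rest of the normalization into a bias. This numpy sketch uses a 1x1 conv (a plain matmul) to keep it minimal; it illustrates the folding identity, not the tfmot transform API.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3))   # 8 positions, 3 input channels
w = rng.standard_normal((3, 4))   # 1x1 conv == per-position matmul
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var, eps = rng.standard_normal(4), rng.random(4) + 0.5, 1e-3

# Unfused: conv followed by batch norm.
y = x @ w
bn = gamma * (y - mean) / np.sqrt(var + eps) + beta

# Folded: one conv op with rescaled weights and a new bias.
scale = gamma / np.sqrt(var + eps)   # per-output-channel scale
w_folded = w * scale
b_folded = beta - mean * scale
folded = x @ w_folded + b_folded

assert np.allclose(bn, folded)
```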

@Xhark - do we have any plans to add support for this? Others, please try the suggestions earlier in this bug and use custom quantization as a workaround.