Siyuan Liu
Thanks @sdasgup3 for pointing out that we need to generate a custom call to `mhlo.uniform_de/quantize` to accommodate the incoming `stablehlo.uniform_quantize/dequantize` in the HLO->MHLO converter. Putting the note here for reference.
> The `v2.3.0-rc` tag is not out yet for PyTorch; does this cause any concern if we make the change ahead of the tag creation? My understanding is the build will...
cc @bhavya01 who shadows this release.
Do not merge until upstream PyTorch has the `rc` tag created, so we can use the real `rc` number in our build.
Upstream already has rc2; updated the version to rc2 in the TF config file.
Hi @Nullkooland, thank you for reporting the issue! Upstream introduced a BC-breaking [change](https://github.com/pytorch/pytorch/pull/118701/files), in which the `fp->quant` pair is now folded by default. As you mentioned, `Qparam` + `DeQuant`...
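For anyone following along, here is a minimal sketch of the uniform quantize/dequantize math that the `fp->quant` pair computes. This is just the affine-quantization arithmetic for illustration; the helper names are hypothetical and not the converter's or PyTorch's API, and the int8 clamp range is an assumption:

```python
def quantize(x: float, scale: float, zero_point: int,
             qmin: int = -128, qmax: int = 127) -> int:
    """fp -> quant: q = clamp(round(x / scale) + zero_point, qmin, qmax).

    Note: Python's round() uses banker's rounding; real backends may
    round half away from zero instead.
    """
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """quant -> fp: x ~= (q - zero_point) * scale."""
    return (q - zero_point) * scale

# A value inside the representable range round-trips (up to scale precision);
# a value outside it saturates at the clamp boundary.
print(quantize(1.0, 0.1, 0))    # in-range value
print(quantize(100.0, 0.1, 0))  # saturates at qmax
print(dequantize(quantize(1.0, 0.1, 0), 0.1, 0))
```

Folding the pair means the compiler rewrites `dequantize(quantize(x))` into a single quantized representation of `x` instead of emitting both ops.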