
Results: 23 comments by andrewor14

> Can we write down all the special cases we make for LSTM in the prepare and convert steps? This will help us to evaluate how well this is aligned with...
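
For context, `nn.LSTM` is one of the bigger special cases: in FX graph mode it is typically handled as a custom module in both steps rather than being traced through. Below is a rough sketch of that wiring, assuming the `PrepareCustomConfig` API; the toy model, shapes, and `fbgemm` qconfig are placeholders, and the mirror-image convert-side mapping is omitted:

```python
import torch
import torch.nn as nn
from torch.ao.nn.quantizable import LSTM as ObservedLSTM
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.fx.custom_config import PrepareCustomConfig
from torch.ao.quantization.quantize_fx import prepare_fx


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(8, 8)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out


model = Model().eval()
example_inputs = (torch.randn(5, 1, 8),)

# Special case in the prepare step: swap nn.LSTM for an "observed"
# quantizable LSTM instead of inserting observers around traced ops
prepare_custom_config = PrepareCustomConfig().set_float_to_observed_mapping(
    nn.LSTM, ObservedLSTM
)
prepared = prepare_fx(
    model,
    get_default_qconfig_mapping("fbgemm"),
    example_inputs,
    prepare_custom_config=prepare_custom_config,
)
# The convert step then carries the matching special case: a
# ConvertCustomConfig that maps the observed LSTM to a quantized one
```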

Ok, I'm merging this. Thanks for all the feedback!

Hi @datumbox, the FX graph mode quantization API should more or less be finalized at this point, cc'ing @jerryzh168 just to confirm. However, due to recent priority shifts I no...
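
For anyone landing here, a minimal post-training flow with that API looks roughly like the sketch below (`prepare_fx` → calibrate → `convert_fx`); the model, input shape, and calibration loop are just placeholders:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

float_model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4)).eval()
example_inputs = (torch.randn(1, 16),)

# prepare: insert observers according to the qconfig mapping
prepared = prepare_fx(float_model, get_default_qconfig_mapping("fbgemm"), example_inputs)

# calibrate: run representative data through the observed model
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 16))

# convert: replace observed patterns with quantized kernels
quantized = convert_fx(prepared)
```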

Hi @Optimox, by the way, we are currently looking into adding support for quantization-aware training (QAT) during finetuning. The goal is to still produce a quantized model for inference, but...
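
To make that concrete, the QAT-during-finetuning flow could look roughly like this with the FX graph mode APIs; the toy model, shapes, and `fbgemm` backend below are illustrative assumptions, not the final design:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx, convert_fx

# Toy float model standing in for the network being finetuned
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 6 * 6, 10),
)
model.train()

example_inputs = (torch.randn(1, 3, 8, 8),)

# prepare: insert fake-quantize modules so finetuning sees quantization noise
prepared = prepare_qat_fx(model, get_default_qat_qconfig_mapping("fbgemm"), example_inputs)

# ... the usual finetuning loop runs on `prepared` here ...

# convert: lower to an actual quantized model for inference
prepared.eval()
quantized = convert_fx(prepared)
```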

@andrewor14 has imported this pull request. If you are a Meta employee, you can view this diff [on Phabricator](https://www.internalfb.com/diff/D56740694).

@rohan-varma @ebsmothers can you help me land this? Looks like I don't have write access.

> Would this by any chance fix the difference in saved_mean and saved_rstd of batch_norm when training is False in cpu vs cuda?
>
> https://github.com/pytorch/pytorch/blob/7ad4ab4765f52cc917fdc1b587f5f6e6d3175cad/torch/_decomp/decompositions.py#L1657-L1663
>
> It confuses...
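
For reference, a small probe that surfaces those extra outputs directly from the aten op on both devices; the shapes and values it prints depend on the PyTorch build, so treat it as a repro sketch rather than a spec:

```python
import torch

# Eval-mode batch norm: compare the saved_mean / saved_rstd outputs
# returned by aten::native_batch_norm on CPU vs CUDA
x = torch.randn(2, 3, 4, 4)
weight, bias = torch.ones(3), torch.zeros(3)
running_mean, running_var = torch.zeros(3), torch.ones(3)

_, saved_mean, saved_rstd = torch.ops.aten.native_batch_norm(
    x, weight, bias, running_mean, running_var,
    False,  # training=False
    0.1,    # momentum
    1e-5,   # eps
)
print("cpu:", saved_mean.shape, saved_rstd.shape)

if torch.cuda.is_available():
    _, mean_c, rstd_c = torch.ops.aten.native_batch_norm(
        x.cuda(), weight.cuda(), bias.cuda(),
        running_mean.cuda(), running_var.cuda(),
        False, 0.1, 1e-5,
    )
    print("cuda:", mean_c.shape, rstd_c.shape)
```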

@pytorchbot merge