ThisisBillhe


Do you use the ImageNet reference batch provided [here](https://github.com/openai/guided-diffusion/tree/main/evaluations)?

You may refer to my other repos, [PTQD](https://github.com/ziplab/PTQD) and [torch_quantizer](https://github.com/ThisisBillhe/torch_quantizer) (8-bit only).

You may refer to our [paper](https://arxiv.org/abs/2310.03270) for visualization results.

I do not train diffusion models from scratch, but you can follow my papers [PTQD](https://github.com/ziplab/PTQD) and [EfficientDM](https://github.com/ThisisBillhe/EfficientDM) to convert a well-trained full-precision diffusion model into a low-bit one.

Hi, I hardly deserve to be called an expert. How about we add each other on WeChat to continue the discussion? WeChat: hyfll2

Hi! You are right, I haven't done any operation fusion yet (neither Conv+ReLU nor Dequant+Quant). Another reason for the slow inference speed is that my dequantization CUDA kernel is slow, which...
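To illustrate the Dequant+Quant fusion mentioned above: a dequantize step immediately followed by a requantize step can be collapsed into a single integer rescale, skipping the round trip through full floating-point tensors. This is a minimal NumPy sketch of the idea, not the repo's actual kernel; the function names and the power-of-two scales (chosen so the arithmetic is exact) are assumptions for the demo.

```python
import numpy as np

def dequant(q, scale):
    # int8 values -> float32
    return q.astype(np.float32) * scale

def quant(x, scale):
    # float32 -> int8 values (round, then clip to the int8 range)
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def fused_requant(q, scale_in, scale_out):
    # Equivalent to quant(dequant(q, scale_in), scale_out), but done as a
    # single multiply by the ratio of scales, then one round-and-clip.
    ratio = scale_in / scale_out
    return np.clip(np.round(q.astype(np.float32) * ratio), -128, 127).astype(np.int8)

q = np.array([-100, -5, 0, 7, 120], dtype=np.int8)
# Power-of-two scales keep every intermediate value exactly representable,
# so the fused and unfused paths round identically in this demo.
a = quant(dequant(q, 0.25), 0.5)
b = fused_requant(q, 0.25, 0.5)
assert np.array_equal(a, b)
```

In a real kernel this fusion removes one global-memory round trip per tensor, which is often where most of the time goes for memory-bound quantized inference.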

Any specific config or problem you have encountered?

1. Yes. 2. We use the corresponding FP model as the teacher model for BiViT.
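For readers unfamiliar with the setup above: using the full-precision (FP) model as the teacher means the low-bit student is trained to match the teacher's softened output distribution. Below is a minimal, hypothetical sketch of such a distillation loss (standard temperature-scaled KL divergence, not the actual BiViT training code); the logits and temperature are made-up illustration values.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 (the conventional factor so gradients stay comparable
    # across temperatures). Zero iff the two distributions match.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

teacher = np.array([[2.0, 0.5, -1.0]])  # FP teacher logits (illustrative)
student = np.array([[1.5, 0.7, -0.8]])  # low-bit student logits (illustrative)
print(kd_loss(student, teacher))  # small positive value
```

The KD term is typically combined with the ordinary cross-entropy loss on ground-truth labels via a weighting coefficient.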

Hi! Thanks for your interest in our work. I will look into it in a few days.