torch_quantizer
torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models.
Hello, I tried to run this project on SDXL, and the inference speed of the int8 model is slower than that of the fp16 model. In an experiment on an A10 GPU,...
Hi, I've recently been writing an int8 quantized conv based on your CUTLASS code. In your code, the int8 conv and the dequantize are two separate kernels. I'd like to fuse the dequantize (DQ) into the conv so it runs as a single kernel and saves memory traffic. I hope to do this via the EpilogueOp in DefaultConv2dFprop, i.e. setting alpha to input_scale * per_channel_weight_scale, so the computation is alpha *...
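The fusion idea above can be sketched numerically: applying alpha = input_scale * per_channel_weight_scale in the epilogue to the int32 accumulator gives the same result as a separate dequantize kernel. A minimal NumPy sketch (a GEMM stand-in for the conv; all names and shapes here are illustrative, not the project's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake quantized operands: int8 activations with a single scale,
# int8 weights with one scale per output channel.
x_int8 = rng.integers(-128, 127, size=(4, 16), dtype=np.int8)
w_int8 = rng.integers(-128, 127, size=(16, 8), dtype=np.int8)
input_scale = 0.05
weight_scale = rng.uniform(0.01, 0.02, size=8)  # per output channel

# Two-kernel version: int8 GEMM accumulating in int32,
# then a separate dequantize pass over the accumulator.
acc_int32 = x_int8.astype(np.int32) @ w_int8.astype(np.int32)
out_two_kernel = acc_int32.astype(np.float32) * input_scale * weight_scale

# Fused version: the epilogue computes alpha * accumulator directly,
# with alpha = input_scale * per_channel_weight_scale.
alpha = input_scale * weight_scale
out_fused = alpha * acc_int32.astype(np.float32)

assert np.allclose(out_two_kernel, out_fused)
```

The two outputs match because scaling the int32 accumulator is associative; the fused version simply skips the extra global-memory round trip the standalone dequantize kernel would incur.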
Add conv-ReLU fusion in myInt8ConvCUDA. Set in_features & out_features from org_module's in/out features in FakeQuantModule. Made qlinear_8bit_L and qconv_8bit_C inherit directly from nn.Linear and nn.Conv2d.
I pushed SD performance to the maximum. Currently I can generate 200 images per second on my 4090 when using 1-step sd-turbo, the onediff compiler, the stable-fast compiler, and...