
ZeroQuant quantization kernels and LKD

sdpmas opened this issue · 10 comments

Hi,

I was trying out the compression library for ZeroQuant quantization (for GPT-J model). While I was able to compress the model, I didn't see any throughput/latency gain from the quantization during inference. I have a few questions regarding this:

  • Do you guys have any guide to running inference on compressed models (especially ZeroQuant)? InferenceEngine only seems to support Mixture-of-Quantization, not ZeroQuant. I also tried int8 quantization without using the compression module, as shown in the code snippet below, but ended up with CUDA error: an illegal memory access.
  • Have you guys released the fused GeLU+Quantize and GeMM+Dequantize kernels proposed in the ZeroQuant paper yet? (For reference, an unfused sketch of what such a fusion computes is included after the snippet below.)
  • Is there a tentative release date for layer-by-layer knowledge distillation (LKD)?
  • What's the motivation for multiplying the quantized input by the scale here? Wouldn't that dequantize the inputs?
import torch
import deepspeed
from deepspeed import module_inject
# assuming gptj_transformer refers to the Hugging Face GPT-J block class
from transformers.models.gptj.modeling_gptj import GPTJBlock as gptj_transformer

injection_policy = {gptj_transformer:
                    module_inject.replace_policy.HFGPTJLayerPolicy}

model = deepspeed.init_inference(
    model,
    mp_size=world_size,            # number of model-parallel GPUs
    dtype=torch.int8,
    quantization_setting=2,
    replace_with_kernel_inject=True,
    injection_policy=injection_policy,
)
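
For context on the second question: the point of a fused GeLU+Quantize kernel is to avoid writing the fp16 GeLU output back to memory and re-reading it just to quantize it. Below is my own unfused reference sketch of what such a fusion computes, assuming a per-token symmetric int8 scheme; this is not DeepSpeed's actual kernel:

import torch
import torch.nn.functional as F

def gelu_quantize_reference(x: torch.Tensor):
    # x: [tokens, hidden] fp16/fp32 activations
    y = F.gelu(x)                                       # activation
    scale = y.abs().amax(dim=-1, keepdim=True) / 127.0  # per-token scale
    q = torch.clamp(torch.round(y / scale), -128, 127).to(torch.int8)
    return q, scale  # int8 activations plus the scales needed by the next GeMM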

Any help would be appreciated.

sdpmas · Aug 10 '22 15:08

Looks like the inference kernels for ZeroQuant have not been released.

gsujankumar · Aug 11 '22 16:08

@gsujankumar have you by any chance been able to quantize GPT-style models like GPT-2 or GPT-J?

sdpmas · Aug 11 '22 17:08

Hi,

The ZeroQuant inference engine is not released yet. The code example in DeepSpeedExamples is only meant to help verify the accuracy of ZeroQuant.

Releasing the kernel/engine is on our calendar, and we are actively working on making it compatible with various models. Please stay tuned.

We will also release LKD soon.

For the last question: the code for training or accuracy testing is different from the final inference engine. Here, quantization is simulated, so we can do quantization-aware training and similar experiments.
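
As a concrete illustration (my own minimal sketch in plain PyTorch, assuming symmetric per-tensor int8; not the actual DeepSpeed code): multiplying the quantized values back by the scale keeps everything in floating point, so downstream layers run as usual but see the quantization error, which is exactly what simulated/fake quantization needs.

import torch

def fake_quantize_int8(x: torch.Tensor) -> torch.Tensor:
    # pick a scale so the largest magnitude maps to 127
    scale = x.abs().max() / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127)  # "quantize"
    return q * scale                                     # multiply by scale = dequantize back to float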

yaozhewei · Aug 11 '22 18:08

Thanks for replying, @yaozhewei. Could you provide an estimate of when ZeroQuant inference will be released? Even a rough estimate would help!

sdpmas · Aug 11 '22 18:08

I have the same questions. Is there any guide to running inference on compressed models (especially ZeroQuant)? Any help would be appreciated.

xk503775229 · Sep 07 '22 06:09

Hi, when will ZeroQuant inference be released?

xk503775229 · Sep 15 '22 13:09

@yaozhewei any news on this?

david-macleod · Oct 20 '22 18:10

@david-macleod The LKD example has just been released (not merged yet): https://github.com/microsoft/DeepSpeedExamples/pull/214

For the kernels, please stay tuned.

yaozhewei · Nov 02 '22 01:11

Thanks @yaozhewei! Do you know whether there is a rough timeline for this? e.g. 1 month, 6 months, 1 year? It would be very useful to know, as we'd like to decide whether to wait or explore other options. Thanks again!

david-macleod · Nov 02 '22 05:11

I have the same problem: after applying ZeroQuant with the DeepSpeedExamples repository's code, I didn't see any throughput/latency gain from the quantization during inference, only a decrease in model size. Have the inference kernels for ZeroQuant been released by now?

HarleysZhang · Apr 21 '23 12:04

@yaozhewei any update on this? Has the ZeroQuant inference engine been released?

aakejiang · Jun 01 '23 07:06

@yaozhewei the newest DeepSpeed (>=0.9.0) can't run any model in INT8; many issues have been opened about this but are not solved yet. Can you tell us which version of DeepSpeed can run an INT8 model? I just want to reproduce the results in your ZeroQuant paper.

Moran232 · Jun 22 '23 05:06