DeepSpeed
ZeroQuant quantization kernels and LKD
Hi,
I was trying out the compression library for ZeroQuant quantization on a GPT-J model. While I was able to compress the model, I didn't see any throughput/latency gain from the quantization during inference. I have a few questions regarding this (a rough sketch of how I compressed the model is at the end of this post):
- Do you have any guide to running inference on compressed models (especially ZeroQuant)? InferenceEngine only seems to support Mixture-of-Quantization (MoQ) but not ZeroQuant. I also tried INT8 quantization without using the compression module, as shown in the code snippet below, but end up getting a "CUDA error: an illegal memory access" error.
- Have you released the fused GeLU+Quantize and GeMM+Dequantize kernels proposed in the ZeroQuant paper yet?
- Is there a tentative release date for layer-by-layer knowledge distillation (LKD)?
- What's the motivation for multiplying the quantized input by the scale here? Wouldn't that dequantize the inputs?
import torch
import deepspeed
from deepspeed import module_inject
# Assuming gptj_transformer refers to the HF GPT-J transformer block class:
from transformers.models.gptj.modeling_gptj import GPTJBlock as gptj_transformer

# Replace every GPT-J block using DeepSpeed's GPT-J injection policy.
injection_policy = {gptj_transformer: module_inject.replace_policy.HFGPTJLayerPolicy}

model = deepspeed.init_inference(
    model,
    mp_size=world_size,                  # model-parallel degree
    dtype=torch.int8,                    # request INT8 inference
    quantization_setting=2,              # quantization group setting
    replace_with_kernel_inject=True,     # use fused inference kernels
    injection_policy=injection_policy,
)
Any help would be appreciated.
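In case it helps, this is roughly how the model was compressed. A minimal sketch assuming the DeepSpeed compression API (init_compression / redundancy_clean) and a ZeroQuant-style weight/activation quantization config; the config file name and contents are illustrative, not my exact setup:

from deepspeed.compression.compress import init_compression, redundancy_clean

# "ds_config.json" is assumed to contain a "compression_training" section that
# enables 8-bit weight and activation quantization (ZeroQuant-style settings).
ds_config = "ds_config.json"

# Wrap the GPT-J model's layers with simulated-quantization hooks.
model = init_compression(model, ds_config)

# ... run the accuracy/eval (or fine-tuning) pass on the compressed model ...

# Fold the compression wrappers back into plain modules before saving/inference.
model = redundancy_clean(model, ds_config)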
Looks like the inference kernels for ZeroQuant are not released yet.
@gsujankumar have you by any chance been able to quantize GPT-style models like GPT-2 or GPT-J?
Hi,
The ZeroQuant inference engine is not released yet. The code example in DeepSpeedExamples is only meant to help verify the accuracy of ZeroQuant.
The kernel/engine release is on our calendar, and we are actively working on making it compatible with various models. Please stay tuned.
For LKD, we will also release it soon.
For the last question: the code for training and accuracy testing is different from the final inference engine. Here, everything is simulated; the quantized input is multiplied by the scale again so the values stay in floating point while carrying the quantization error, which is what lets us do quantization-aware training and other experiments.
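To illustrate what "simulated" means here, a minimal sketch of symmetric fake quantization (not the actual DeepSpeed code; the function name and per-tensor scaling are only for illustration):

import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Symmetric per-tensor scale: map the largest magnitude onto the integer range.
    qmax = 2 ** (num_bits - 1) - 1                       # 127 for 8 bits
    scale = x.abs().max() / qmax
    # Quantize: scale down, round, clamp to the integer grid.
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    # "Dequantize": multiply by the scale again, so the tensor stays in floating
    # point (and autograd keeps working) while still carrying the quantization error.
    return q * scale

x = torch.randn(4, 8)
x_sim = fake_quantize(x)   # same shape/dtype as x, but only 256 distinct values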
Thanks for replying, @yaozhewei. Do you think you could provide an estimate of when ZeroQuant inference will be released? Any rough estimate would help!
I have the same questions. Is there any guide to running inference on compressed models (especially ZeroQuant)? Any help would be appreciated.
Hi, when will the ZeroQuant inference engine be released?
@yaozhewei any news on this?
@david-macleod the LKD example has just been released (not merged yet): https://github.com/microsoft/DeepSpeedExamples/pull/214
For the kernels, please stay tuned.
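For anyone who wants the gist while that PR is pending: layer-by-layer knowledge distillation, as described in the ZeroQuant paper, trains each quantized layer individually to match the output of the corresponding original layer on the same input. A rough sketch with illustrative names (not the DeepSpeedExamples API); layers are assumed to map hidden states to hidden states:

import torch
import torch.nn.functional as F

def layerwise_kd(teacher_layers, student_layers, hidden_states, steps=100, lr=1e-4):
    # Distill one layer at a time: the quantized (student) layer learns to
    # reproduce the original (teacher) layer's output, given the same input.
    for t_layer, s_layer in zip(teacher_layers, student_layers):
        optimizer = torch.optim.Adam(s_layer.parameters(), lr=lr)
        with torch.no_grad():
            target = t_layer(hidden_states)      # teacher output for this layer
        for _ in range(steps):
            loss = F.mse_loss(s_layer(hidden_states), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Use the teacher's output as input to the next layer (one possible choice;
        # feeding the student's output forward is another variant).
        hidden_states = target
    return student_layers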
Thanks @yaozhewei! Do you know whether there is a rough timeline for this, e.g. 1 month, 6 months, 1 year? It would be very useful to know, as we'd like to decide whether to wait or explore other options. Thanks again!
I have the same problem: after running ZeroQuant with the DeepSpeedExamples repository's code, I didn't see any throughput/latency gain from the quantization during inference, only a decrease in model size. Have the inference kernels for ZeroQuant been released by now?
@yaozhewei any update on this? Has the ZeroQuant inference engine been released?
@yaozhewei the newest deepspeed>=0.9.0 can't run any model in INT8; many issues have been opened but not solved yet. Can you tell us which version of DeepSpeed can run an INT8 model? I just want to reproduce the results in your ZeroQuant paper.