Support inference with WOQ and LoRA adapter

Open Yuan0320 opened this issue 10 months ago • 3 comments

Hi itrex team, thanks for the great work!

I've been experimenting with the Weight Only Quantization (WOQ) from ITREX, following the provided examples in weightonlyquant.md#example-for-cpu-device. The results are promising.
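A minimal sketch of that WOQ CPU flow, along the lines of the weightonlyquant.md example (the model name below is a placeholder, and the exact loading options may differ by ITREX version):

```python
# Hypothetical sketch of 4-bit weight-only quantized inference on CPU with ITREX.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# load_in_4bit=True applies 4-bit weight-only quantization when loading
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)

inputs = tokenizer("Once upon a time", return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```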

Now I'm interested in extending this by incorporating a trained LoRA adapter for inference. I'd like to combine the quantized pretrained weights (WOQ) with a LoRA adapter kept in FP32/FP16 for inference. I'm wondering if this is feasible today, or if it's on the roadmap for future updates? Any insights or assistance would be greatly appreciated. Thanks!

Yuan0320 avatar Mar 28 '24 05:03 Yuan0320

Hi @Yuan0320 , thanks for using ITREX.

Regarding combining the pretrained weights (WOQ) with a LoRA adapter (FP32/FP16) for inference: do you mean adding the LoRA adapter (FP32/FP16) on top of the WOQ model, or merging the LoRA adapter's weights into the WOQ model? Could you please clarify?

If you meant the latter case, you can load the LoRA adapter and merge it into the model first, then apply WOQ after the adapter has been merged. This way the model's structure doesn't change; only its weights are updated.
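A rough sketch of that merge-then-quantize flow, assuming the adapter was trained with PEFT (model names and paths below are placeholders; the exact ITREX loading options may differ by version):

```python
# Sketch: fold the LoRA weights into the full-precision base model with PEFT's
# merge_and_unload, then apply weight-only quantization to the merged model.
from peft import PeftModel
from transformers import AutoModelForCausalLM as HFAutoModelForCausalLM
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

base_name = "meta-llama/Llama-2-7b-hf"   # placeholder base model
adapter_path = "./my-lora-adapter"       # placeholder adapter directory

# 1. Load the FP32/FP16 base model, attach the LoRA adapter, and merge it in
base = HFAutoModelForCausalLM.from_pretrained(base_name)
merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()

# 2. Save the merged full-precision model, then quantize it with WOQ
merged.save_pretrained("./merged-model")
woq_model = AutoModelForCausalLM.from_pretrained("./merged-model", load_in_4bit=True)
```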

XinyuYe-Intel avatar Mar 28 '24 07:03 XinyuYe-Intel

Hi @XinyuYe-Intel, thanks for the quick reply and insight, it makes sense. I initially meant the former case, since I want to keep the adapter at high precision to minimize the accuracy loss from WOQ. I suspect it may be challenging to achieve this (adding a LoRA adapter (FP32/FP16) on top of the WOQ model).
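Roughly, the idea would look like the following plain-PyTorch sketch (not an existing ITREX API, just illustrating the concept, similar in spirit to QLoRA's high-precision adapter over a quantized base):

```python
# Conceptual-only sketch: keep the LoRA branch in FP16 and add its output to
# the output of a frozen weight-only-quantized linear layer.
import torch
import torch.nn as nn

class LoRAOnQuantizedLinear(nn.Module):
    def __init__(self, quantized_linear: nn.Module, in_features: int,
                 out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = quantized_linear  # frozen WOQ layer (any quantized linear)
        self.lora_A = nn.Linear(in_features, r, bias=False, dtype=torch.float16)
        self.lora_B = nn.Linear(r, out_features, bias=False, dtype=torch.float16)
        self.scaling = alpha / r

    def forward(self, x):
        base_out = self.base(x)                        # low-precision base path
        lora_out = self.lora_B(self.lora_A(x.to(torch.float16))) * self.scaling
        return base_out + lora_out.to(base_out.dtype)  # high-precision correction
```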

Yuan0320 avatar Mar 28 '24 08:03 Yuan0320

No problem at all. And yes, we don't support the former case yet.

XinyuYe-Intel avatar Mar 29 '24 02:03 XinyuYe-Intel