
[Feature Request] Support for QDoRA: Efficient quantized fine-tuning

Open · s-smits opened this issue 1 year ago · 2 comments

Today we’re releasing the next step: QDoRA. This is just as memory-efficient and scalable as FSDP/QLoRA, and critically is also as accurate for continued pre-training as full-weight training. We think that this is likely to be the best way for most people to train language models. We’ve run preliminary experiments on Llama 2, and completed some initial ones on Llama 3. The results are extremely promising.

https://www.answer.ai/posts/2024-04-26-fsdp-qdora-llama3.html


Answer.AI implementation: https://github.com/AnswerDotAI/fsdp_qlora/pull/51

Caveats (from HF implementation):

  • DoRA only supports linear and Conv2d layers at the moment.
  • DoRA introduces a bigger overhead than pure LoRA, so it is recommended to merge the weights for inference (see the sketch after this list).
  • DoRA should work with weights quantized with bitsandbytes (“QDoRA”). However, issues have been reported when using QDoRA with DeepSpeed Zero2.
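
For reference, the caveats above come from the Hugging Face PEFT implementation, where DoRA is toggled on a standard LoRA config. Below is a minimal QDoRA sketch assuming PEFT >= 0.9 and a bitsandbytes 4-bit base model; the checkpoint name and target modules are illustrative, not a prescribed setup:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# The "Q" in QDoRA: load the frozen base weights 4-bit quantized via bitsandbytes
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative checkpoint
    quantization_config=bnb,
)

# DoRA rides on top of a normal LoRA config via the use_dora flag
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # illustrative subset of layers
    use_dora=True,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

# After training, merging removes the DoRA inference overhead noted above:
# model = model.merge_and_unload()
```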

It would be great to support this method for cost-efficient and presumably even higher-quality fine-tuning!

s-smits avatar Apr 23 '24 17:04 s-smits

Official PyTorch implementation of DoRA: https://github.com/NVlabs/DoRA
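
For context, DoRA decomposes each pretrained weight W0 into a magnitude vector m and a direction, and applies the low-rank update only to the direction: W' = m · (W0 + s·BA) / ||W0 + s·BA||, with the norm taken per output row. A minimal sketch of that reparameterization in MLX (initialization and hyperparameters are illustrative; this is not the NVlabs or mlx-examples code):

```python
import math
import mlx.core as mx
import mlx.nn as nn

class DoRALinear(nn.Module):
    """Sketch of DoRA: W' = m * (W0 + s * B @ A) / ||W0 + s * B @ A||_row."""

    def __init__(self, in_dims: int, out_dims: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen base weight (quantized in QDoRA; kept dense here for clarity)
        self.weight = mx.random.normal((out_dims, in_dims)) / math.sqrt(in_dims)
        # Low-rank factors: A is random, B starts at zero so the update is 0 at init
        bound = 1 / math.sqrt(in_dims)
        self.lora_a = mx.random.uniform(low=-bound, high=bound, shape=(rank, in_dims))
        self.lora_b = mx.zeros((out_dims, rank))
        # Trainable magnitude vector, initialized to the row norms of W0
        self.m = mx.linalg.norm(self.weight, axis=1)
        self.scale = alpha / rank

    def __call__(self, x: mx.array) -> mx.array:
        # Direction: base weight plus the scaled low-rank update
        directed = self.weight + self.scale * (self.lora_b @ self.lora_a)
        # Normalize each output row, then rescale by the learned magnitude
        norm = mx.linalg.norm(directed, axis=1, keepdims=True)
        w = (self.m[:, None] / norm) * directed
        return x @ w.T
```

At initialization this reduces exactly to the base layer (B = 0 and m matches the row norms), so training starts from the pretrained behavior, as in plain LoRA.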

cmhungsteve avatar Apr 30 '24 01:04 cmhungsteve

https://github.com/ml-explore/mlx-examples/pull/891

zaithottakath avatar Jul 16 '24 05:07 zaithottakath

Please close this one; it's already addressed by #891, as @zaithottakath points out.

fredrik-smedberg avatar Dec 28 '24 12:12 fredrik-smedberg