
Does unsloth have a script for full parameter fine-tuning?

Open luoruijie opened this issue 1 year ago • 12 comments

luoruijie avatar Jun 26 '24 03:06 luoruijie

up

msciancalepore98 avatar Jun 28 '24 08:06 msciancalepore98

Apologies for the delay - I just relocated to SF, so I only just got to this. Currently Unsloth does not support full fine-tuning, sorry - maybe in a future release.

To mimic full fine-tuning (FFT), also train lm_head and embed_tokens, and increase the LoRA rank to, say, 256.
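To get a feel for why a high rank approaches FFT, here is a rough, stdlib-only sketch of the trainable-parameter arithmetic. The 4096x4096 projection size is an assumption (roughly a Llama-7B-class attention projection), and `lora_params` is a hypothetical helper, not an Unsloth API:

```python
# LoRA on a single d_out x d_in linear layer adds two low-rank factors:
# A (r x d_in) and B (d_out x r), so r * (d_in + d_out) extra parameters.
def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

d = 4096          # assumed hidden size (Llama-7B-class)
full = d * d      # parameters in the full weight matrix
for r in (16, 256):
    frac = lora_params(d, d, r) / full
    print(f"rank {r}: {frac:.1%} of the full matrix is trainable")
# rank 16 trains about 0.8% of each targeted matrix; rank 256 trains 12.5%,
# which, combined with fully training lm_head and embed_tokens, gets much
# closer to full fine-tuning.
```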

danielhanchen avatar Jul 01 '24 00:07 danielhanchen

Thank you very much.

luoruijie avatar Jul 09 '24 02:07 luoruijie

Uh, silly question, but @danielhanchen - are you sure Unsloth isn't already supporting full fine-tuning by default?

If you delete the model = FastLanguageModel.get_peft_model(...) step from one of the examples, you're still left with a FastLanguageModel that works with SFTTrainer and produces no explicit errors.

c1b avatar Aug 09 '24 09:08 c1b

> Uh silly question but @danielhanchen are you sure that Unsloth isn't already supporting full finetuning by default?
>
> If you delete the model = FastLanguageModel.get_peft_model(...) step from one of the examples, you're still left with FastLanguageModel which is responsive to SFTTrainer & produces no explicit errors.

Yes, that would work, but I think you would lose most or all of the Unsloth optimisations.

mahiatlinux avatar Aug 09 '24 09:08 mahiatlinux

Actually, almost all of the kernels that affect Llama get applied during the .from_pretrained(...) call - see here:

https://github.com/unslothai/unsloth/blob/e4c8ceacb3fca634f78e662873a01c37678fcb3e/unsloth/models/llama.py#L1224-L1253

c1b avatar Aug 09 '24 10:08 c1b

Oh, it works, but the RMS LayerNorm weights are not trained - I think that's actually the only issue. If you comment out the fast_rms_layernorm calls, in theory Unsloth can do full fine-tuning, albeit much less optimized, as @mahiatlinux mentioned. But there are still some VRAM savings from Unsloth's gradient checkpointing, and the CE loss kernel works, so there are some benefits.

danielhanchen avatar Aug 10 '24 03:08 danielhanchen

Curious, why doesn't it support full fine-tuning?

brando90 avatar Aug 27 '24 20:08 brando90

> Uh silly question but @danielhanchen are you sure that Unsloth isn't already supporting full finetuning by default?
>
> If you delete the model = FastLanguageModel.get_peft_model(...) step from one of the examples, you're still left with FastLanguageModel which is responsive to SFTTrainer & produces no explicit errors.

I tried it on Qwen2, and it produces an error when saving the model: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'model.embed_tokens.weight', 'lm_head.weight'}]

I tried another way to save, but the saved model generates unreadable tokens.

Any suggestions?
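For what it's worth, that save error usually means the model's embed_tokens.weight and lm_head.weight are tied - two state-dict keys aliasing one tensor - which safetensors refuses to serialize. A hedged, stdlib-only sketch of the idea (plain lists stand in for tensors; with a real model you would clone the tensor instead, e.g. `state_dict["lm_head.weight"] = state_dict["lm_head.weight"].clone()`):

```python
import copy

# Toy state dict: both keys alias the SAME object, like tied weights.
shared = [0.1, 0.2, 0.3]
state_dict = {
    "model.embed_tokens.weight": shared,
    "lm_head.weight": shared,
}
assert state_dict["model.embed_tokens.weight"] is state_dict["lm_head.weight"]

# Untie: give lm_head its own copy so no two entries share storage.
state_dict["lm_head.weight"] = copy.deepcopy(state_dict["lm_head.weight"])

# Equal values, distinct objects - safe to serialize key by key.
assert state_dict["model.embed_tokens.weight"] == state_dict["lm_head.weight"]
assert state_dict["model.embed_tokens.weight"] is not state_dict["lm_head.weight"]
```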

xudou3 avatar Sep 06 '24 02:09 xudou3

Would love to get an update for this as well. :)

dfilimon avatar Sep 10 '24 10:09 dfilimon

Currently not yet - but we'll support full fine-tuning in a future release, sorry.

danielhanchen avatar Sep 14 '24 08:09 danielhanchen

Estimated time?


brando90 avatar Sep 15 '24 00:09 brando90

Hi guys, apologies for the delays - every transformer-style model is now supported! :)

Read our blogpost about it: https://unsloth.ai/blog/gemma3#everything

Preliminary support for full fine-tuning and 8-bit fine-tuning: set full_finetuning = True or load_in_8bit = True. Both will be optimized further in the future! A reminder: you will need more powerful GPUs!
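A minimal usage sketch based on the flags named above. The model id and max_seq_length are placeholders/assumptions, and the actual load call is left in comments so the snippet stays runnable without unsloth installed or a GPU:

```python
# Flags announced above, collected as kwargs for FastLanguageModel.from_pretrained.
load_kwargs = dict(
    model_name="unsloth/gemma-3-4b-it",  # hypothetical model id - pick your own
    max_seq_length=2048,                 # assumed; choose to fit your data
    full_finetuning=True,                # full-parameter fine-tuning
    # load_in_8bit=True,                 # alternative: 8-bit fine-tuning
)

# With unsloth installed (and a sufficiently large GPU):
#   from unsloth import FastLanguageModel
#   model, tokenizer = FastLanguageModel.from_pretrained(**load_kwargs)
print(sorted(load_kwargs))
```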

Also, multi-GPU support is coming real soon, so be on the lookout!

CC: @luoruijie @msciancalepore98 @brando90 @jyweky @samuelazran @danielhanchen @c1b @mahiatlinux @wheynelau @mishaobu @celsowm

shimmyshimmer avatar Mar 15 '25 11:03 shimmyshimmer