alpaca-lora

How to conduct full-tuning without LoRA?

Open · A11en0 opened this issue on Apr 18, 2023 · 3 comments

When I try to modify the original finetune.py script to conduct full tuning, it returns an error like the one below:

[screenshot of the error message]

I commented out everything related to peft except model = prepare_model_for_int8_training(model)

and

    old_state_dict = model.state_dict
    model.state_dict = (
        lambda self, *_, **__: get_peft_model_state_dict(
            self, old_state_dict()
        )
    ).__get__(model, type(model))
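
For reference, here is a minimal sketch of the setup described above, assuming the stock finetune.py loading code (the checkpoint name is only a placeholder):

    import torch
    from transformers import LlamaForCausalLM
    from peft import prepare_model_for_int8_training

    base_model = "decapoda-research/llama-7b-hf"  # placeholder; whichever base checkpoint finetune.py is pointed at

    # The model is still loaded quantized to int8, exactly as in the original script
    model = LlamaForCausalLM.from_pretrained(
        base_model,
        load_in_8bit=True,
        torch_dtype=torch.float16,
        device_map="auto",
    )

    # The only peft-related call that was kept; LoraConfig / get_peft_model are commented out,
    # so no trainable LoRA adapters are added on top of the frozen int8 base weights.
    model = prepare_model_for_int8_training(model)

    # (The state_dict patch quoted above only affects saving: it extracts the PEFT
    # adapter weights at checkpoint time.)

Passing this model straight to the Trainer is the configuration that produces the error shown in the screenshot.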

A11en0 · Apr 18, 2023

I get the same; this is probably because you can't train in fp16 for some reason.

If you enable fp32 in the model (no LoRA) then it works, but it requires a lot of memory; I even get an out-of-memory error on 4x 40 GB A100s.
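
As a rough back-of-the-envelope estimate (assuming LLaMA-7B, fp32 weights, and plain Adam), full fine-tuning needs about 4 bytes of weights + 4 bytes of gradients + 8 bytes of optimizer state per parameter, i.e. roughly 16 bytes × 7B ≈ 112 GB before activations; ordinary data parallelism replicates all of that on every GPU, so it will not fit in 40 GB per card without sharding the optimizer state (e.g. DeepSpeed ZeRO or FSDP).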

Oxi84 · Apr 18, 2023

Setting the load_in_8bit parameter to False solves the problem.
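
A minimal sketch of that change, again assuming the stock finetune.py loading code (the checkpoint name is only a placeholder); with the model loaded in full precision, the peft-specific calls can be dropped entirely:

    import torch
    from transformers import LlamaForCausalLM

    base_model = "decapoda-research/llama-7b-hf"  # placeholder for your base checkpoint

    # Load full-precision weights instead of the int8-quantized ones
    model = LlamaForCausalLM.from_pretrained(
        base_model,
        load_in_8bit=False,
        torch_dtype=torch.float32,  # fp32, since fp16 full fine-tuning hits the error discussed above
    )

    # With no quantization there is no need for prepare_model_for_int8_training,
    # get_peft_model, or the state_dict patch; pass the model to transformers.Trainer
    # directly and expect a much larger memory footprint than the LoRA setup.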

adf1178 · Apr 19, 2023

> I get the same; this is probably because you can't train in fp16 for some reason.
>
> If you enable fp32 in the model (no LoRA) then it works, but it requires a lot of memory; I even get an out-of-memory error on 4x 40 GB A100s.

Is it because LlamaForCausalLM doesn't implement the quantization functionality needed for training?

A11en0 · Apr 19, 2023