
Can anyone post an already trained model?

Open HCBlackFox opened this issue 1 year ago • 10 comments

HCBlackFox avatar Mar 17 '23 20:03 HCBlackFox

Hello, you can find this 13B one here: https://huggingface.co/samwit/alpaca13B-lora

Otherwise, there is the 7B one here: https://huggingface.co/tloen/alpaca-lora-7b

Please note these are LoRA models; they need the base model to work.

And here is the base model for the 7B: https://huggingface.co/decapoda-research/llama-7b-hf
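
For reference, here is a minimal sketch (untested) of how one of these adapters is combined with its base model for inference; it mirrors the generate.py snippet quoted later in this thread and assumes the transformers, peft, and bitsandbytes packages are installed:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# load the base LLaMA weights in 8-bit, then apply the LoRA adapter on top of them
base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(
    base_model,
    "tloen/alpaca-lora-7b",
    torch_dtype=torch.float16,
)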

collant avatar Mar 17 '23 20:03 collant

Thank you

HCBlackFox avatar Mar 17 '23 20:03 HCBlackFox

Is there a 30B 4-bit LoRA out there? I think I read somewhere that finetuning in 4-bit might not be supported?

0xbitches avatar Mar 18 '23 07:03 0xbitches

Can the original LLaMA-7B weights (consolidated.00.pth) be used? Or can I convert them to HF format?

ttio2tech avatar Mar 18 '23 13:03 ttio2tech

Any links for models trained w/3-epochs on the new cleaned dataset?

gururise avatar Mar 18 '23 17:03 gururise

I just finished training this 13B one but haven't gotten it to work yet (I'm using multiple GPUs, so maybe that's the issue): https://huggingface.co/mattreid/alpaca-lora-13b

mattreid1 avatar Mar 18 '23 17:03 mattreid1

@collant can you help me understand how I can load the LoRA model trained on the 52k dataset and use it to train on another data.json?

In finetune.py I can find where the LLaMA 7B model is loaded:

# finetune.py: load the base LLaMA model in 8-bit together with its tokenizer
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map=device_map,
)
tokenizer = LlamaTokenizer.from_pretrained(
    "decapoda-research/llama-7b-hf", add_eos_token=True
)

and afterwards the LoRA config object is created:

# finetune.py: wrap the base model with freshly initialised LoRA weights
config = LoraConfig(
    r=LORA_R,
    lora_alpha=LORA_ALPHA,
    target_modules=TARGET_MODULES,
    lora_dropout=LORA_DROPOUT,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

Does loading the LoRA model from HF involve calling another function and loading that checkpoint? I can see that there is a save_pretrained function; maybe I need to load the LoRA model via this? Sorry if this sounds confusing.

edit: after a little more googling I found this load_attn_procs function; maybe it's something around here

edit2: it seems that it was inside generate.py all along:

    # generate.py: load the base model in 8-bit, then apply the published LoRA adapter on top of it
    model = LlamaForCausalLM.from_pretrained(
        "decapoda-research/llama-7b-hf",
        load_in_8bit=True,
        torch_dtype=torch.float16,
        device_map="auto",
    )
    model = PeftModel.from_pretrained(
        model, "tloen/alpaca-lora-7b",
        torch_dtype=torch.float16
    )
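
To then continue training that adapter on another data.json, one possible approach (a rough, untested sketch rather than anything taken from this repo) is to rebuild the PEFT model exactly as finetune.py does and overwrite its freshly initialised LoRA weights with the published adapter before running the usual training loop. The LoraConfig values below are only illustrative and must match the adapter's adapter_config.json:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import LoraConfig, get_peft_model, set_peft_model_state_dict
from huggingface_hub import hf_hub_download

# base model and tokenizer, as in finetune.py (which also runs
# prepare_model_for_int8_training on the 8-bit model before wrapping it)
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(
    "decapoda-research/llama-7b-hf", add_eos_token=True
)

# recreate the LoRA wrapper; these hyperparameters are an example and should
# be copied from the adapter you want to resume
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

# fetch the already-trained adapter weights and load them into the PEFT model;
# after this, the normal Trainer setup from finetune.py can run on the new data.json
adapter_weights = hf_hub_download("tloen/alpaca-lora-7b", "adapter_model.bin")
set_peft_model_state_dict(model, torch.load(adapter_weights, map_location="cpu"))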

felri avatar Mar 18 '23 19:03 felri

30B LoRA adapters here: https://huggingface.co/baseten/alpaca-30b

aspctu avatar Mar 19 '23 23:03 aspctu

@felri Have you found a solution? I found #44, which may help, but I'm still confused about what <PATH> is.

T-Atlas avatar Mar 20 '23 08:03 T-Atlas

Any links for models trained w/3-epochs on the new cleaned dataset?

+1

diegolondrina avatar Mar 20 '23 09:03 diegolondrina

Please report @larasatistevany for spamming.

https://support.github.com/contact/report-abuse?category=report-abuse&report=larasatistevany

-> I want to report abusive content or behavior.
-> I want to report SPAM, a user that is disrupting me or my organization's experience on GitHub, or a user who is using my personal information without my permission
-> A user is disrupting me or my organization's experience and productivity by posting SPAM off-topic or other types of disruptive content in projects they do not own.

Put this in the form:

spamming in issue comments
https://github.com/tloen/alpaca-lora/issues/52#issuecomment-1570561693
https://github.com/tloen/alpaca-lora/issues/52#issuecomment-1571059071

Thanks!

wafflecomposite avatar Jun 02 '23 23:06 wafflecomposite