Angainor Development
Looks like you passed a wrong value for base_model. https://github.com/tloen/alpaca-lora#training-finetunepy How did you invoke finetune.py?
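For reference, a minimal sketch of a working invocation along the lines of the README linked above (the model and dataset names are just example placeholders, substitute your own):

```bash
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --output_dir './lora-alpaca'
```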
It's hard to say anything without more context.
Yep, there have been changes to peft recently https://github.com/huggingface/peft/issues/285
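If you are hitting that, one workaround sketch is to pin peft to a release or commit that predates the change; the exact version below is an assumption on my side, check the issue thread for the one that matches your setup:

```bash
pip uninstall -y peft
# 0.2.0 is an assumed example of an older release; adjust per the linked issue
pip install peft==0.2.0
```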
Looks like a good angle to me. I would train over alpaca, not bare llama, so it has already learned more generic stuff and may be faster to train (you...
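A rough sketch of that, assuming finetune.py still exposes --resume_from_checkpoint and that you have the alpaca LoRA weights locally (all paths are placeholders):

```bash
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path './my_instructions.json' \
    --output_dir './lora-custom' \
    --resume_from_checkpoint './lora-alpaca'  # start from the existing alpaca adapter
```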
Looks like you didn't feed a llama model as base_model. What was the full command you used?
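A quick sanity-check sketch: whatever you pass as base_model should report a llama model_type (the hub id below is only an example):

```bash
# Assumed check that the checkpoint really is a LLaMA model; should print "llama"
python -c "from transformers import AutoConfig; print(AutoConfig.from_pretrained('decapoda-research/llama-7b-hf').model_type)"
```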
Did you invoke with torchrun?
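For multi-GPU runs, a launch sketch along the lines of the README (GPU count, device list and port are assumptions, adjust to your machine):

```bash
WORLD_SIZE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun \
    --nproc_per_node=4 --master_port=1234 \
    finetune.py --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' --output_dir './lora-alpaca'
```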
All hyperparameters have to match your dataset size. The default params, including warmup, are tuned for large datasets (e.g., a warmup of 100 steps). If you use a dataset of a...
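As a rough illustration of what "matching" means for a small dataset (a few hundred examples, say); the numbers below are placeholders, not recommendations, and warmup_steps is set inside finetune.py's TrainingArguments rather than on the command line, so that one needs an edit in the script:

```bash
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path './small_dataset.json' \
    --output_dir './lora-small' \
    --num_epochs 10 \
    --val_set_size 0 \
    --learning_rate 1e-4
# and lower warmup_steps (e.g. 0-10) inside finetune.py for a dataset this small
```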
You have more stuff there than in the default requirements https://github.com/tloen/alpaca-lora/blob/main/requirements.txt, including two different bitsandbytes packages: bitsandbytes==0.37.0 and bitsandbytes-cuda117==0.26.0.post2. What specific pip module did not install with "Could not find a...
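If the environment is the problem, a cleanup sketch (this assumes you want to match the repo's requirements rather than keep the extra CUDA-specific build):

```bash
pip uninstall -y bitsandbytes bitsandbytes-cuda117
pip install -r requirements.txt  # reinstall only what the repo expects
```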
Definitely not (until someone finds a magic trick, who knows, but that looks a lot like infinite compression).
I don't think there is a definitive answer to that. You could either add your instructions to alpaca and train with the extended dataset, or train with your dataset only,...
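For the first option, a merge sketch assuming both files are JSON arrays of {"instruction", "input", "output"} records (file names other than the repo's alpaca_data.json are placeholders):

```bash
# Concatenate the stock alpaca data with your own instructions
jq -s 'add' alpaca_data.json my_instructions.json > extended_dataset.json
python finetune.py --base_model 'decapoda-research/llama-7b-hf' \
    --data_path './extended_dataset.json' --output_dir './lora-extended'
```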