Randy H
> Y'all: This shouldn't be difficult. I finetuned the 30B 8-bit LLaMA with Alpaca LoRA in about 26 hours on a couple of 3090s with good results. The 65B model...
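For context, here is a minimal sketch of the kind of 8-bit LoRA setup described above, using the Hugging Face transformers + peft + bitsandbytes stack. This is not Randy's exact script: the model id, target modules, and hyperparameters are illustrative assumptions, though the target modules match the usual alpaca-lora defaults.

```python
# Sketch: LoRA fine-tuning of an 8-bit-quantized LLaMA across multiple GPUs.
# Assumptions: any local/hub 30B LLaMA checkpoint; r/alpha/dropout values are illustrative.
import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "path/to/llama-30b-hf"  # assumption: substitute your own checkpoint

model = LlamaForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,           # bitsandbytes int8 quantization of the base weights
    torch_dtype=torch.float16,
    device_map="auto",           # shard layers across the available GPUs (e.g. two 3090s)
)
model = prepare_model_for_kbit_training(model)  # cast norms/head, enable input grads

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # the usual alpaca-lora attention targets
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters train; the 8-bit base stays frozen
```

The frozen 8-bit base plus fp16 adapter weights is what makes a 30B model trainable on two 24 GB cards; only the adapter gradients and optimizer state need full precision.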
> @RandyHaylor 4-bit lora training is currently only in this repo https://github.com/johnsmith0031/alpaca_lora_4bit afaik
>
> I'm interested in doing this myself, too. Will have to monitor the temperatures of the...