Angainor Development
> Did you continue to see the loss go down past 3 epochs? Is your thought that by allowing fine-tuning of all 4 LoRA modules, 3 epochs is not enough?...
Do you have a GPU? Just one?
> i have the same problem. do you solve it

Working fix just above: https://github.com/tloen/alpaca-lora/issues/230#issuecomment-1492338315
Load the tokenizer from the base model (LLaMA), not from your checkpoint.
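For illustration, a minimal sketch of that fix, assuming the standard transformers tokenizer API (both paths below are placeholders):

```python
# Load the tokenizer from the BASE model, not from the LoRA checkpoint;
# "decapoda-research/llama-7b-hf" and "./lora-alpaca" are placeholder paths.
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
# wrong: LlamaTokenizer.from_pretrained("./lora-alpaca")  # checkpoint dir
```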
As I understand it, it's more a VRAM requirement to fit the model than a GPU-capability issue. LoRA merges its weights into the base model, but does not add nor...
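To make the merge concrete, a hedged sketch using PEFT's `merge_and_unload` (model names are placeholders):

```python
# Merging a LoRA adapter into its base model with PEFT; both repo names
# below are placeholders for your own base model and adapter.
from peft import PeftModel
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")

# Folds the low-rank deltas into the base weights: the merged model keeps
# the base model's parameter count, so the VRAM needed stays the same.
merged = model.merge_and_unload()
```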
The path stuff you can ignore, but you need a proper bitsandbytes install. The pre-compiled one you have does not have GPU support activated. Not a CUDA issue, a bitsandbytes one. (Can't...
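If you're unsure which binary you got, a quick check (assuming your bitsandbytes version ships the usual diagnostics):

```python
# Importing bitsandbytes warns if only the CPU library could be loaded;
# from a shell, `python -m bitsandbytes` prints the full CUDA setup report
# (assumed available in your release).
import bitsandbytes as bnb

print(bnb.__version__)
```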
I'd say it's because of compute capability < 7.5: too old a GPU. From your logs: `using_igemmlt = torch.cuda.get_device_capability(device=A.device) >= (7, 5) and not state.force_no_igemmlt`. Or you have to find...
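You can test that condition directly with the same PyTorch call the log quotes:

```python
import torch

# Same test as the quoted bitsandbytes line: int8 igemmlt needs
# compute capability >= (7, 5), i.e. Turing or newer.
capability = torch.cuda.get_device_capability(device=0)
print(capability, capability >= (7, 5))
```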
> Is there a good way to increase or decrease the LoRA rank of an existing adapter?

Not that I'm aware of. Given the relatively small time it takes to...
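For context, the rank is fixed when the adapter is created, so "changing" it means training a new adapter with a different `r` (a sketch with illustrative values, assuming PEFT's `LoraConfig`):

```python
from peft import LoraConfig

# r sets the shapes of the A/B matrices, so an existing adapter's weights
# can't simply be resized; retrain with the rank you want instead.
config = LoraConfig(
    r=16,  # the new rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```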
> Does rank 16 LoRA use mixed precision? Casting it to cuda seems to fix the problem.

Rank is unrelated to the model's fp precision. `generate` now has an extra param...
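For reference, the cast the quoted comment mentions is independent of rank (a sketch; the model name is a placeholder):

```python
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
# Precision and device are properties of the weights, not of the LoRA rank:
model = model.to(device="cuda", dtype=torch.float16)
```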
The LoRA from the HF repo you point to is a rank-16 LoRA, which is why it's bigger than your 17 MB one (rank 8, expected file size). (Check the config JSON file, you have...
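A quick way to check an adapter's rank is to read its config (assuming PEFT's standard `adapter_config.json`; the path is a placeholder):

```python
import json

with open("adapter_config.json") as f:
    cfg = json.load(f)

# Adapter size grows roughly linearly with rank, so a rank-16 file is
# about twice the size of a rank-8 one.
print(cfg["r"])
```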