LLaMA-LoRA-Tuner
Does not work any longer on Google Colab or locally
I managed to run the code and got the expected error, but on a second run it just kept loading and nothing happened. I've been trying for an hour now with the same result.

Running it locally is possible, but it simply doesn't work: whatever the input, there is no output, not even an error message.
Yeah, I get the same errors. On train -> preparing model I get:

`Asking to pad but the tokenizer does not have a padding token. Please select a token to use as 'pad_token' (tokenizer.pad_token = tokenizer.eos_token e.g.) or add a new pad token via tokenizer.add_special_tokens({'pad_token': '[PAD]'}).`

Or:

`Target modules ['q_proj', 'v_proj'] not found in the base model. Please check the target modules and try again.`
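For the first error, the message itself points at the workaround: give the tokenizer a pad token before training. A minimal sketch of that fix, using the small GPT-2 tokenizer as a stand-in, since like the LLaMA tokenizer it ships without a pad token:

```python
from transformers import AutoTokenizer

# GPT-2's tokenizer is a small stand-in here: like the LLaMA tokenizer,
# it has no pad token out of the box.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(tokenizer.pad_token)  # None

# Workaround suggested by the error message: reuse EOS as the pad token.
tokenizer.pad_token = tokenizer.eos_token

# Padding a batch now works instead of raising "Asking to pad but the
# tokenizer does not have a padding token".
batch = tokenizer(["short", "a longer example sentence"], padding=True)
assert len(batch["input_ids"][0]) == len(batch["input_ids"][1])
```

The second error usually means the loaded base model is not a LLaMA-architecture checkpoint: `q_proj` and `v_proj` are the names of LLaMA's attention projection layers, and PEFT can't attach LoRA adapters to modules that don't exist in the model it was given.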
I've tried lots of models, but none of them ever loads. It was working a week ago, so maybe something changed on Colab's side.
Solved here: https://github.com/zetavg/LLaMA-LoRA-Tuner/discussions/29#discussioncomment-6050831