Ditto P S
While converting the llama-13b weights I'm getting this error: `RuntimeError: Internal: unk is not defined.`
I was able to resolve it by replacing `tokenizer.model` with the one from Hugging Face. Thank you!
Same issue for me on LLaMA 7B: the loss was not decreasing. I used LR=1e-6 and EPS=1e-3 for 8600 steps.
A Dockerfile is really needed to clear up the env-related issues. Thanks for initiating this. I tried running it on my machine, but I'm getting this error after running docker.
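For anyone else fighting environment issues, here is a minimal Dockerfile sketch. The base-image tag, the `requirements.txt` path, and the `finetune.py` entry point are all assumptions about this repo, not confirmed:

```dockerfile
# Sketch only: base image tag, file names, and entry point are assumptions.
FROM nvcr.io/nvidia/pytorch:23.07-py3

WORKDIR /workspace
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
# Replace with the repo's actual training entry point.
CMD ["python", "finetune.py"]
```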
@AngainorDev Thank you, that fixed the problem. The issue was with trying to use all GPUs.
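In case it helps others: a minimal sketch of pinning the run to a single GPU via `CUDA_VISIBLE_DEVICES`. The variable itself is standard CUDA behavior, but whether this repo needs anything beyond it is an assumption; it must be set before CUDA is first initialized:

```python
import os

# Make only GPU 0 visible to this process, so libraries that default to
# "use every GPU" (e.g. device_map="auto") see a single device.
# Must run before torch/CUDA is first initialized in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

Setting the same variable on the shell command line before `python` works equally well.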
I'm getting this error:

```
Traceback (most recent call last):
  File "inference.py", line 26, in <module>
    model = PeftModel.from_pretrained(model, LORA_WEIGHTS, torch_dtype=torch.float16, device_map={'': 0})
  File "/root/bud-llms-backend/server/.env/lib/python3.8/site-packages/peft/peft_model.py", line 135, in from_pretrained
    config =...
```
I'm facing a similar issue: `Expected q_dtype == torch::kFloat16 || ((is_sm8x || is_sm90) && q_dtype == torch::kBFloat16) to be true, but got false`
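That assertion just encodes FlashAttention's dtype support: fp16 everywhere, bf16 only on Ampere (sm8x) or Hopper (sm90) GPUs. A small Python restatement of the check (the function name and string dtypes are mine, for illustration only); the usual fix is to load or cast the model to `torch.float16` before enabling flash attention:

```python
def flash_attn_dtype_ok(q_dtype: str, is_sm8x: bool = False, is_sm90: bool = False) -> bool:
    """Mirror the C++ check from the error message:
    fp16 is always accepted; bf16 only on sm8x/sm90 GPUs."""
    return q_dtype == "float16" or ((is_sm8x or is_sm90) and q_dtype == "bfloat16")

# fp32 inputs (the default when a model is loaded without torch_dtype)
# fail the check, which produces exactly the error above.
print(flash_attn_dtype_ok("float32"))                 # False
print(flash_attn_dtype_ok("float16"))                 # True
print(flash_attn_dtype_ok("bfloat16", is_sm8x=True))  # True
```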
I'm also facing the same issue
I have tested this version of the branch on 4 × A100 80GB. Training runs, but I'm getting an OOM while saving the checkpoint.
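One pattern that has helped with OOM at checkpoint time is saving from a single rank only, so every process isn't materializing a full copy of the weights at once. A stdlib-only sketch; the `RANK` env var (set by `torchrun`-style launchers) and the callback shape are assumptions, and real code would use `torch.distributed.get_rank()` plus the trainer's own save API:

```python
import os

def save_checkpoint_rank0(save_fn, path):
    """Call save_fn(path) only on the rank-0 process.

    Sketch: reads the RANK env var that torchrun-style launchers set;
    real code would use torch.distributed.get_rank() instead.
    """
    rank = int(os.environ.get("RANK", "0"))
    if rank != 0:
        return False  # other ranks skip the memory-hungry save
    save_fn(path)
    return True

# Usage sketch: save_fn would be e.g. model.save_pretrained.
save_checkpoint_rank0(lambda p: print(f"saving to {p}"), "ckpt/")
```

Note this only removes the per-rank duplication; if gathering the full fp32 state dict on rank 0 itself OOMs, offloading the state dict to CPU before saving may also be needed.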
I got the same error with LLaMA 2 70B.
@kaiwang13 Could you please share how you resolved the issue?