simple-llm-finetuner
Inference output text keeps running on...
Model: Vanilla LLaMA
Input:
Why did the chicken cross the road?
Output:
Why did the chicken cross the road? To get to the other side.
Why did the chicken cross the road? To get to the other side. Why did the chicken cross the road? To get to the other side. Why did the chicken cross the road? To
Using text-generation-webui:
python server.py --load-in-8bit --listen --model llama-7B
Why did the chicken cross the road?? To get to the other side.
Why did the chicken cross the road? Because it was a free range chicken and it wanted to go home!
I need to tweak the inference code so generation stops instead of running on.
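A minimal sketch of the kind of tweak meant here, assuming a Hugging Face transformers-style `generate()` call (the model path and parameter values are illustrative, not the repo's actual settings):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Illustrative model path -- substitute whatever base model / adapter is actually used.
model_name = "decapoda-research/llama-7b-hf"

tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)

prompt = "Why did the chicken cross the road?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The bits that keep the output from running on:
#  - max_new_tokens caps how much text is generated
#  - eos_token_id lets generation stop early at end-of-sequence
#  - repetition_penalty discourages the repeated Q/A loop shown above
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.2,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
    )

text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
# Optionally cut at the first blank line, since a base LLaMA model tends to
# keep writing new Q/A pairs rather than emit EOS on its own.
print(text.split("\n\n")[0])
```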
How did you get the finetuned model running in text-generation-webui? Please share your magic sauce :)