LLaVA-pp
Fine-tuning with LoRA: model output never ends
Hi, thanks for your wonderful work.
I am struggling to use my LoRA-tuned model. These are the steps I followed:
- Fine-tuned with LoRA
  - base model: Undi95/Meta-Llama-3-8B-Instruct-hf
  - llama3 conversation template
- Ran inference with Gradio
  - launched the server with --model-base Undi95/Meta-Llama-3-8B-Instruct-hf and --model-path checkpoints/LLaVA-Meta-Llama-3-8B-Instruct-lora (roughly the commands sketched below)
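
For context, this is roughly how I bring up the Gradio demo. Hosts and ports are from my local setup, so please read it as a sketch of what I run rather than an exact reproduction script:

```bash
# Standard LLaVA serving pipeline: controller + Gradio web UI + one model worker.
python -m llava.serve.controller --host 0.0.0.0 --port 10000
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload

# The model worker loads the LoRA checkpoint on top of the base model.
python -m llava.serve.model_worker \
    --host 0.0.0.0 \
    --controller http://localhost:10000 \
    --port 40000 --worker http://localhost:40000 \
    --model-path checkpoints/LLaVA-Meta-Llama-3-8B-Instruct-lora \
    --model-base Undi95/Meta-Llama-3-8B-Instruct-hf
```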
The problem: the model's output never ends. (I think something is wrong with the EOS token?)
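
As a quick sanity check (just my guess at where to look; the model name below is the base I fine-tuned from), I printed what the tokenizer reports as its EOS token. With Llama-3, chat turns end with <|eot_id|>, so if generation only stops on <|end_of_text|> it may never terminate:

```bash
# Print the tokenizer's configured EOS token and the id of Llama-3's
# end-of-turn token <|eot_id|>.
python -c "from transformers import AutoTokenizer; \
tok = AutoTokenizer.from_pretrained('Undi95/Meta-Llama-3-8B-Instruct-hf'); \
print('eos_token:', tok.eos_token, tok.eos_token_id); \
print('eot_id:', tok.convert_tokens_to_ids('<|eot_id|>'))"
```

If the configured eos_token turns out to be <|end_of_text|> rather than <|eot_id|>, that would match what I am seeing: the llama3 template ends each turn with <|eot_id|>, but generation never stops on it.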