LLM-Finetuning-Toolkit
Example config file to run inference only on a fine-tuned model
Is it possible to provide a config file that shows how to run inference on an already fine-tuned model?
I have run the starter config, and it looks like the final PEFT model weights are in experiment/XXX/weights/.
So how do I re-run only inference (and possibly the QA checks) on that model?
This is not yet implemented in the CLI. I've opened a feature request to address this: https://github.com/georgian-io/LLM-Finetuning-Toolkit/issues/161
Meanwhile, here's a notebook you can use for inference on a custom dataset. Just unzip it and drop it into the project folder. The code is really simple and can easily be adapted into a script. Note: this probably won't work with a pipx installation and requires pip.
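Since the notebook itself isn't reproduced here, below is a minimal sketch of what the inference step typically boils down to with a standard transformers + peft setup, assuming your adapter weights sit in experiment/XXX/weights/ (the directory, prompt, and generation settings are placeholders, not the toolkit's exact API):

```python
# Minimal PEFT inference sketch (assumes transformers + peft are installed).
# ADAPTER_DIR is a placeholder: point it at your experiment's weights folder.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ADAPTER_DIR = "experiment/XXX/weights"  # replace XXX with your run's directory

# The adapter config records which base model the adapter was trained on.
peft_config = PeftConfig.from_pretrained(ADAPTER_DIR)

tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
base_model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the fine-tuned adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_DIR)
model.eval()

prompt = "Summarize the following text: ..."  # swap in a row from your dataset
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

To run over a whole custom dataset, loop the tokenize/generate/decode steps over your rows, or batch the prompts before calling `generate`.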
Let me know if you have any issues running this! 😀
Closed due to inactivity. Please let us know if you require further assistance.
Please refer to https://github.com/georgian-io/LLM-Finetuning-Toolkit/issues/161 for any updates on this feature request.