OpenLLM
How to fine-tune a base model and what does adapter id mean?
For example, what's aarnphm/opt-6-7b-quotes here?
```bash
openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6-7b-quotes
```
These are LoRA adapters that you can find on the Hugging Face Hub.
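To make that concrete, here's a rough sketch of what the adapter id points to, using plain transformers + peft rather than anything OpenLLM-specific: a Hub repo of LoRA weights that gets attached to (and can be merged into) the base model.

```python
# A minimal sketch (plain transformers + peft, not OpenLLM internals) of what the
# adapter id refers to: a Hub repo containing LoRA weights that sits on top of the
# frozen base model and can optionally be merged into it.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

# The adapter repo ships adapter_config.json plus the LoRA weight deltas; peft
# attaches them to the matching layers of the base model.
model = PeftModel.from_pretrained(base, "aarnphm/opt-6-7b-quotes")

# Folding the deltas into the base weights gives a single merged model for inference.
model = model.merge_and_unload()
```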
--adapter-id allows you to serve and merge multiple LoRA weights into the base model. We decided internally that a simplified fine-tune API is probably not the best UX for now, since you can pretty much find and modify scripts online to do fine-tuning on the model of your choice with your own datasets.
Or at least on my end, I don't see a lot of value in adding an LLM.tune API for now.
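For reference, the kind of script you'd find and adapt looks roughly like this: a minimal LoRA fine-tune with transformers + peft. The dataset (Abirate/english_quotes), the hyperparameters, and the output repo name below are just illustrative placeholders, not anything OpenLLM prescribes.

```python
# A minimal LoRA fine-tuning sketch with transformers + peft.
# Dataset, hyperparameters, and the Hub repo name are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "facebook/opt-6.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the base model with LoRA adapters; only these small matrices get trained.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Tokenize a small text dataset (an arbitrary quotes dataset, chosen for illustration).
dataset = load_dataset("Abirate/english_quotes", split="train")

def tokenize(batch):
    return tokenizer(batch["quote"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="opt-6.7b-quotes-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Push the resulting adapter to the Hub; its repo id is then what you pass to
# --adapter-id when starting the server. (Repo name here is hypothetical.)
model.push_to_hub("your-username/opt-6.7b-quotes-lora")
```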