LLaMA-Factory
How to use APIs for fine-tuning?
Reminder
- [X] I have read the README and searched the existing issues.
Reproduction
Running api_demo.py, I did not find any API endpoint for fine-tuning.
Expected behavior
No response
System Info
No response
Others
No response
This API is the serving endpoint for inference, not for fine-tuning. If you want to fine-tune, an example can be found in examples/lora_single_gpu, among others. The README also links two official examples:
- Colab: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
- Local machine: Please refer to usage
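For orientation, a local LoRA fine-tuning run from those examples looks roughly like the following. This is an illustrative sketch: the script path, model, dataset, and flag names are assumptions and depend on your LLaMA-Factory version, so check the scripts in examples/lora_single_gpu for the exact invocation.

```shell
# Illustrative only: compare with the scripts shipped in examples/lora_single_gpu
# of your LLaMA-Factory checkout before running.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_en \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir saves/llama2-7b-lora-sft
```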
Thank you for your response. I would like to know how I can use APIs to fine-tune models for others, rather than using the command line or web UI.
It's not supported yet. You can develop a high-level API based on your needs, for example with FastAPI, that calls this project's local interfaces to meet your requirements.
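To make the suggestion concrete, here is a minimal sketch of such a service using only the Python standard library (FastAPI would play the same role with less boilerplate). The training script path and flag names passed to `build_train_command` are assumptions that depend on your LLaMA-Factory version; this is not an official interface of the project.

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer


def build_train_command(cfg):
    """Map a JSON config to a CLI invocation of the training script.

    Hypothetical mapping: the script path and flag names must match
    your LLaMA-Factory version (see examples/ in the repo).
    """
    cmd = ["python", "src/train_bash.py", "--stage", "sft", "--do_train"]
    for key, value in cfg.items():
        cmd += [f"--{key}", str(value)]
    return cmd


class TrainHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON training config from the request body.
        length = int(self.headers.get("Content-Length", 0))
        cfg = json.loads(self.rfile.read(length) or b"{}")
        # Launch training as a background process (fire-and-forget sketch;
        # a real service would track job status and stream logs).
        proc = subprocess.Popen(build_train_command(cfg))
        self.send_response(202)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"pid": proc.pid}).encode())


# To serve: HTTPServer(("127.0.0.1", 8001), TrainHandler).serve_forever()
```

A client would then POST a config such as `{"model_name_or_path": "...", "finetuning_type": "lora"}` to start a run, which keeps the command-line workflow but puts it behind an HTTP interface.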
Ok, thanks