LLM Finetune with PEFT
This PR brings in a pipeline to fine-tune a Mistral model with the PEFT library on the ViGGO dataset.
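For context, the fine-tuning attaches a LoRA adapter to the base model via PEFT. A minimal sketch of what that looks like is below; the model ID, rank, and target modules are illustrative and not necessarily the settings used in this project:

```python
# Minimal PEFT/LoRA sketch -- hyperparameters and target modules are illustrative only.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
lora_config = LoraConfig(
    r=8,                                  # adapter rank (assumed value)
    lora_alpha=16,                        # scaling factor (assumed value)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()        # only the LoRA weights are trainable
```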
Key highlights:
- The project has a single pipeline that runs the full LLM fine-tuning cycle (see the sketch after this list):
  - Data extraction and preparation
  - Fine-tuning the base model on the new data
  - Evaluation of both the base and the fine-tuned model
  - Promotion of the fine-tuned model to the target environment if it outperforms both the base model and the currently promoted one
- The model can be trained in full on a local (or remote) orchestrator
- The model can be trained with step operators, so that only the GPU-intensive steps are pushed to the step operator (we used the Vertex AI step operator)
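The wiring roughly follows the shape below. This is a minimal sketch assuming ZenML's `@step`/`@pipeline` decorators; the step names, signatures, and the `"vertex"` step operator name are placeholders rather than the project's actual code (drop the `step_operator` argument to run every step on the orchestrator):

```python
# Rough sketch of the pipeline wiring -- step names and signatures are illustrative.
from zenml import pipeline, step


@step
def prepare_data() -> dict:
    # Placeholder: extract and tokenize the ViGGO train/eval splits.
    return {"train": [], "eval": []}


@step(step_operator="vertex")  # GPU-intensive step pushed to the step operator
def finetune(datasets: dict) -> str:
    # Placeholder: attach a LoRA adapter via PEFT, train on the new data,
    # and return a path/URI to the fine-tuned adapter.
    return "path/to/finetuned-adapter"


@step(step_operator="vertex")
def evaluate(datasets: dict, finetuned: str) -> dict:
    # Placeholder: score the base and fine-tuned models on the eval split.
    return {"base": 0.0, "finetuned": 0.0}


@step
def promote(metrics: dict) -> None:
    # Placeholder: promote the fine-tuned model to the target environment
    # only if it beats the base model and the currently promoted one.
    ...


@pipeline
def llm_peft_finetuning():
    datasets = prepare_data()
    finetuned = finetune(datasets)
    metrics = evaluate(datasets, finetuned)
    promote(metrics)
```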
- We also move the previous LitGPT fine-tuning project to `llm-litgpt-finetuning`; this new project becomes `llm-lora-finetuning`
P.S. The diff is a bit off due to the project move; the LitGPT project was a pure lift-and-shift with no changes to it.
@coderabbitai review
@htahir1 are we OK to merge this? It will replace the current LoRA example with the PEFT one, and LitGPT will be moved to a new directory. Any blockers?
Just one more thing that came to mind: we should probably also rename the `llm-lora-finetuning` template, right?