Explore a comprehensive collection of resources, tutorials, papers, tools, and best practices for fine-tuning Large Language Models (LLMs). Perfect for ML practitioners and researchers!

Awesome LLMs Fine-Tuning

Welcome to the curated collection of resources for fine-tuning Large Language Models (LLMs) like GPT, BERT, RoBERTa, and their numerous variants! In this era of artificial intelligence, the ability to adapt pre-trained models to specific tasks and domains has become an indispensable skill for researchers, data scientists, and machine learning practitioners.

Large Language Models, trained on massive datasets, capture an extensive range of knowledge and linguistic nuances. However, to unleash their full potential in specific applications, fine-tuning them on targeted datasets is paramount. This process not only enhances the models’ performance but also ensures that they align with the particular context, terminology, and requirements of the task at hand.
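
To make the process concrete, the sketch below shows plain supervised fine-tuning with the Hugging Face transformers and datasets libraries; the checkpoint, dataset, and hyperparameters are illustrative placeholders rather than recommendations.

```python
# A minimal fine-tuning sketch (assumes: pip install transformers datasets).
# The checkpoint and dataset below are placeholders; swap in your own.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"          # any pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=2)

dataset = load_dataset("imdb")            # your targeted dataset goes here

def tokenize(batch):
    # Truncate/pad so every example fits the model's input window
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```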

In this awesome list, we have meticulously compiled a range of resources, including tutorials, papers, tools, frameworks, and best practices, to aid you in your fine-tuning journey. Whether you are a seasoned practitioner looking to expand your expertise or a beginner eager to step into the world of LLMs, this repository is designed to provide valuable insights and guidelines to streamline your endeavors.

Table of Contents

  • GitHub projects
  • Articles & Blogs
  • Online Courses
  • Books
  • Research Papers
  • Videos
  • Tools & Software
  • Conferences & Events
  • Slides & Presentations
  • Podcasts

GitHub projects

  • LlamaIndex 🦙: A data framework for your LLM applications. (23010 stars)
  • Petals 🌸: Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (7768 stars)
  • LLaMA-Factory: An easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM3). (5532 stars)
  • lit-gpt: Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. (3469 stars)
  • H2O LLM Studio: A framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/ (2880 stars)
  • Phoenix: AI Observability & Evaluation - Evaluate, troubleshoot, and fine-tune your LLM, CV, and NLP models in a notebook. (1596 stars)
  • LLM-Adapters: Code for the EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models". (769 stars)
  • Platypus: Code for fine-tuning the Platypus family of LLMs using LoRA. (589 stars)
  • xtuner: A toolkit for efficiently fine-tuning LLMs (InternLM, Llama, Baichuan, Qwen, ChatGLM2). (540 stars)
  • DB-GPT-Hub: A repository of models, datasets, and fine-tuning techniques for DB-GPT, aimed at improving model performance, especially on Text-to-SQL; a 13B LLM fine-tuned with this project achieved higher execution accuracy than GPT-4 on the Spider evaluation. (422 stars)
  • LLM-Finetuning-Hub: Repository that contains LLM fine-tuning and deployment scripts along with the maintainers' research findings. (416 stars)
  • Finetune_LLMs: Repo for fine-tuning causal LLMs. (391 stars)
  • MFTCoder: A high-accuracy, high-efficiency multi-task fine-tuning framework for code LLMs, with support for multiple models and training algorithms. (337 stars)
  • llmware: An enterprise-grade LLM-based development framework, tools, and fine-tuned models. (289 stars)
  • LLM-Kit: 🚀 An all-in-one WebUI platform for the latest LLMs: supports mainstream LLM APIs and open-source models, with knowledge bases, databases, role play, Midjourney text-to-image, LoRA and full-parameter fine-tuning, dataset creation, Live2D, and other end-to-end tools. (232 stars)
  • h2o-wizardlm: Open-source implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning. (228 stars)
  • hcgf: Humanable Chat Generative-model Fine-tuning (LLM fine-tuning). (196 stars)
  • llm_qlora: Fine-tuning LLMs using QLoRA; a minimal sketch of the technique follows this list. (136 stars)
  • awesome-llm-human-preference-datasets: A curated list of human preference datasets for LLM fine-tuning, RLHF, and evaluation. (124 stars)
  • llm_finetuning: Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes). (114 stars)
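
Several projects above (e.g., Platypus, llm_qlora, LLM-Adapters) revolve around parameter-efficient fine-tuning. As a rough sketch of the QLoRA idea using the peft, transformers, and bitsandbytes libraries: load the base model in 4-bit, freeze it, and train only small low-rank adapters. The checkpoint and hyperparameters here are illustrative assumptions, not values taken from any repository above.

```python
# QLoRA-style sketch: 4-bit frozen base model + trainable LoRA adapters.
# Checkpoint and hyperparameters are illustrative assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # quantize the frozen base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # placeholder checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # typically well under 1% trainable
# From here, train with a standard loop or transformers.Trainer.
```

Because gradients flow only through the small adapter matrices while the 4-bit base stays frozen, this style of fine-tuning can fit multi-billion-parameter models on a single GPU.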

Articles & Blogs

Online Courses

Books

Research Papers

Videos

Tools & Software

  • LLaMA Efficient Tuning 🛠️: Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon).
  • H2O LLM Studio 🛠️: Framework and no-code GUI for fine-tuning LLMs.
  • PEFT 🛠️: Parameter-Efficient Fine-Tuning (PEFT) methods for efficient adaptation of pre-trained language models to downstream applications (see the adapter-merging sketch after this list).
  • Petals: Run large language models like BLOOM-176B collaboratively, allowing you to load a small part of the model and team up with others for inference or fine-tuning. 🌸
  • NVIDIA NeMo: A toolkit for building state-of-the-art conversational AI models, designed to run on Linux. 🚀
  • Ludwig AI: A low-code framework for building custom LLMs and other deep neural networks. Easily train state-of-the-art LLMs with a declarative YAML configuration file. 🤖
  • bert4torch: An elegant PyTorch implementation of transformers. Load various open-source large-model weights for inference and fine-tuning. 🔥
  • Alpaca.cpp: Run a fast ChatGPT-like model locally on your device: the LLaMA foundation model combined with an open reproduction of Stanford Alpaca's instruction fine-tuning. 🦙
  • promptfoo: Evaluate and compare LLM outputs, catch regressions, and improve prompts using automatic evaluations and representative user inputs. 📊
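
As a companion to the PEFT entry above, here is a small sketch of how a trained LoRA adapter is commonly loaded and merged back into its base model for deployment; both the checkpoint name and paths are hypothetical placeholders.

```python
# Attach a saved LoRA adapter to its base model, then merge the weights
# for adapter-free inference. All paths/names here are hypothetical.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# merge_and_unload() folds the low-rank updates into the base weights,
# returning a plain transformers model with no runtime PEFT dependency.
model = model.merge_and_unload()
model.save_pretrained("merged-model")
```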

Conferences & Events

Slides & Presentations

Podcasts


This initial version of the Awesome List was generated with the help of the Awesome List Generator, an open-source Python package that uses GPT models to automatically generate starting points for resource lists on a given topic.