FATE-LLM
Federated Learning for LLMs.
FATE-LLM is a framework that supports federated learning for large language models (LLMs).

Design Principles
- Federated learning across heterogeneous large and small models.
- Improve the training efficiency of federated LLMs with parameter-efficient fine-tuning methods (see the sketch after this list).
- Protect the intellectual property of LLMs using FedIPR.
- Protect data privacy during training and inference through privacy-preserving mechanisms.
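As a hedged illustration of the parameter-efficient principle above, the sketch below wraps a Hugging Face causal language model with a LoRA adapter via the peft library so that only a small fraction of parameters is trainable. The model name and hyperparameters are assumptions chosen to keep the example small; this is not FATE-LLM's own API, though its PELLM components build on the same idea.

```python
# Illustrative sketch only (not FATE-LLM's API): parameter-efficient fine-tuning
# with LoRA via the Hugging Face peft library. Model name and hyperparameters
# are assumptions, not FATE-LLM defaults.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# A small causal LM stands in for a full-size LLM such as ChatGLM3-6B.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # low-rank adapter dimension
    lora_alpha=16,   # adapter scaling factor
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)
# Only the LoRA adapter parameters are trainable, which keeps federated
# aggregation and communication lightweight.
model.print_trainable_parameters()
```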

Deployment
Standalone deployment
Please refer to FATE-Standalone deployment.
- To deploy FATE-LLM v2.0, deploy FATE-Standalone with version >= 2.1, then make a new directory {fate_install}/fate_llm, clone the code into it, install the Python requirements, and add {fate_install}/fate_llm/python to PYTHONPATH (a quick importability check is sketched after this list).
- To deploy FATE-LLM v1.x, deploy FATE-Standalone with 1.11.3 <= version < 2.0, then copy the directory python/fate_llm to {fate_install}/fate/python/fate_llm.
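As a minimal sketch (not part of FATE-LLM itself), the snippet below checks that a v2.0 standalone deployment is visible to Python. The FATE_INSTALL environment variable and the default install root are hypothetical placeholders used only for illustration.

```python
# Minimal deployment check, assuming {fate_install}/fate_llm/python has been
# added to PYTHONPATH as described above. FATE_INSTALL and the default path
# below are hypothetical placeholders, not values defined by FATE-LLM.
import importlib
import os
import sys

fate_install = os.environ.get("FATE_INSTALL", "/data/projects/fate")
llm_python_path = os.path.join(fate_install, "fate_llm", "python")

# Mirror the PYTHONPATH setting for this process in case it was not exported.
if llm_python_path not in sys.path:
    sys.path.append(llm_python_path)

try:
    fate_llm = importlib.import_module("fate_llm")
    print("fate_llm imported from:", fate_llm.__file__)
except ImportError as exc:
    print("fate_llm is not importable; check PYTHONPATH:", exc)
```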
Cluster deployment
Use the FATE-LLM deployment packages to deploy; refer to FATE-Cluster deployment for more details.
Quick Start
- Federated ChatGLM3-6B Training
- Builtin Models In PELLM
- Offsite Tuning Tutorial
- FedKSeed