FedLab-benchmarks
This repo contains standard FL algorithm implementations and FL benchmarks using FedLab.
Currently, the following algorithms and benchmarks are available:
Optimization Algorithms
- [x] FedAvg: Communication-Efficient Learning of Deep Networks from Decentralized Data (see the aggregation sketch after this list)
- [x] FedAsync: Asynchronous Federated Optimization
- [x] FedProx: Federated Optimization in Heterogeneous Networks
- [x] FedDyn: Federated Learning based on Dynamic Regularization
- [x] Personalized-FedAvg: Improving Federated Learning Personalization via Model Agnostic Meta Learning
- [x] qFFL: Fair Resource Allocation in Federated Learning
- [x] FedMGDA+: Federated Learning meets Multi-objective Optimization
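As a reference point for the algorithms above, the sketch below shows the FedAvg server-side step in plain PyTorch: client model states are averaged with weights proportional to local sample counts. The function name and arguments are illustrative and are not FedLab's API.

```python
# Minimal FedAvg aggregation sketch (hypothetical helper, not FedLab's API).
import torch

def fedavg_aggregate(client_states, client_num_samples):
    """Weighted average of client state_dicts, i.e. the FedAvg server update."""
    total = sum(client_num_samples)
    weights = [n / total for n in client_num_samples]
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            w * state[key].float() for w, state in zip(weights, client_states)
        )
    return global_state

# Example: aggregate two toy "models" trained on 60 and 40 local samples.
client_states = [
    {"w": torch.tensor([1.0, 2.0])},
    {"w": torch.tensor([3.0, 4.0])},
]
print(fedavg_aggregate(client_states, [60, 40]))  # {"w": tensor([1.8, 2.8])}
```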
Compression Algorithms
- [x] DGC: Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- [x] QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
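To illustrate the idea behind QSGD, here is a minimal sketch of unbiased stochastic quantization of a gradient to s levels; `qsgd_quantize` and its arguments are hypothetical names, not this repo's implementation.

```python
# Hedged sketch of QSGD-style stochastic gradient quantization (s levels).
import torch

def qsgd_quantize(grad, s=8):
    """Stochastically round |grad| / ||grad|| onto a grid of s levels."""
    norm = grad.norm()
    if norm == 0:
        return torch.zeros_like(grad)
    level = grad.abs() / norm * s          # position in [0, s]
    lower = level.floor()
    prob = level - lower                   # probability of rounding up
    rounded = lower + torch.bernoulli(prob)
    return norm * grad.sign() * rounded / s  # unbiased: E[output] = grad

g = torch.tensor([0.3, -0.1, 0.8])
print(qsgd_quantize(g, s=4))  # low-precision, unbiased version of g
```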
Datasets
- [x] LEAF: A Benchmark for Federated Settings
- [x] NIID-Bench: Federated Learning on Non-IID Data Silos: An Experimental Study
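For intuition on the non-IID settings studied in NIID-Bench, the following is a hedged sketch of label-Dirichlet partitioning; `dirichlet_partition`, `alpha`, and `num_clients` are illustrative names, not the benchmark's API. A smaller `alpha` yields a more skewed label distribution per client.

```python
# Hedged sketch of Dirichlet label-skew partitioning (NIID-Bench-style setting).
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label skew."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        rng.shuffle(idx)
        # Fraction of this class assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

labels = np.random.randint(0, 10, size=1000)   # toy 10-class labels
parts = dirichlet_partition(labels, num_clients=5, alpha=0.3)
print([len(p) for p in parts])                 # uneven, skewed splits
```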
More reproductions of FL algorithms and settings are on the way, and we welcome contributions of federated learning algorithms built on FedLab. If you encounter any problems, do not hesitate to open an issue or email the repo maintainers.