Chaoyang He

162 comments by Chaoyang He

@zhangzhen-8965 we are working on supporting customized DNN models under VFL. When do you need it?

@zhangzhen-8965 At that time, it should be ready for you to use.

@canhongpoxiao try this one: https://github.com/FedML-AI/FedML/tree/master/python/examples/cross_silo/grpc_fedavg_mnist_lr_example

@alex-liang-kh please have a look.

BytePS is for data center-based distributed training, while FedML (e.g., FedAvg) is for edge-based distributed training. The particular assumptions of FL include: 1. heterogeneous data distribution across devices (non-I.I.D.); 2. resource...
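
To make point 1 concrete, here is a rough sketch of a Dirichlet-based non-I.I.D. split; the partitioner below is only illustrative (`dirichlet_non_iid_partition` is a hypothetical helper, not FedML's built-in data loader):

```python
# Illustrative only: skew each device's label mix with a Dirichlet prior,
# so the local datasets are non-I.I.D. across devices.
import numpy as np

def dirichlet_non_iid_partition(labels, num_devices, alpha=0.5, seed=0):
    """Split sample indices across devices with Dirichlet-skewed label proportions."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    device_indices = [[] for _ in range(num_devices)]
    for c in range(num_classes):
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        # Smaller alpha => more skewed (more heterogeneous) class proportions.
        proportions = rng.dirichlet(alpha * np.ones(num_devices))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for device_id, chunk in enumerate(np.split(idx_c, cut_points)):
            device_indices[device_id].extend(chunk.tolist())
    return device_indices

# Toy usage: 10 devices over a synthetic 10-class label vector.
labels = np.random.default_rng(1).integers(0, 10, size=5000)
sizes = [len(p) for p in dirichlet_non_iid_partition(labels, num_devices=10)]
print(sizes)  # uneven per-device sample counts reflect the heterogeneity
```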

@wizard1203 Thanks for your suggestion. As for acceleration, FedML is the only research-oriented FL framework that supports cross-machine, multi-GPU distributed training. To further accelerate, we can definitely use many...
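
As a generic illustration of the cross-machine, multi-GPU setup (plain `torch.distributed`, not FedML's actual launcher or API), the sketch below maps one FL worker process per GPU across machines; it assumes a launcher such as `torchrun` has set `RANK`, `WORLD_SIZE`, and `LOCAL_RANK`:

```python
# Generic illustration (not FedML's API): one process per GPU across machines.
import os
import torch
import torch.distributed as dist

def init_cross_machine_worker():
    rank = int(os.environ["RANK"])              # global process id; 0 = aggregator
    world_size = int(os.environ["WORLD_SIZE"])  # total processes over all machines
    local_rank = int(os.environ["LOCAL_RANK"])  # GPU index on the local machine

    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    return rank, world_size, torch.device(f"cuda:{local_rank}")

if __name__ == "__main__":
    rank, world_size, device = init_cross_machine_worker()
    role = "server aggregator" if rank == 0 else f"client worker {rank}"
    print(f"{role} on {device}, {world_size} processes in total")
```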

> FedML supports multiple parameter servers for communication efficiency via hierarchical FL and decentralized FL.
> In hierarchical FL, there are group parameter servers that split the total...
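
To illustrate the group-server idea, here is a toy sketch of two-level FedAvg aggregation: group servers average their own clients, then the top-level server averages the group results. The helpers `weighted_average` and `hierarchical_fedavg` are illustrative names, not FedML internals.

```python
# Illustrative two-level FedAvg: group servers aggregate their own clients,
# then the top-level server aggregates the group results.
from collections import OrderedDict
import torch

def weighted_average(state_dicts, weights):
    """FedAvg-style weighted average of PyTorch state_dicts."""
    total = float(sum(weights))
    avg = OrderedDict()
    for key in state_dicts[0]:
        avg[key] = sum(w * sd[key] for sd, w in zip(state_dicts, weights)) / total
    return avg

def hierarchical_fedavg(groups):
    """groups: one list per group server, each holding (state_dict, num_samples) pairs."""
    group_models, group_sizes = [], []
    for clients in groups:
        states = [sd for sd, _ in clients]
        counts = [n for _, n in clients]
        group_models.append(weighted_average(states, counts))  # group-level aggregation
        group_sizes.append(sum(counts))
    return weighted_average(group_models, group_sizes)          # global aggregation

# Toy usage: 2 groups x 2 clients, each client holding a tiny 2x2 weight tensor.
def client(scale, n):
    return OrderedDict(w=torch.full((2, 2), float(scale))), n

global_model = hierarchical_fedavg([[client(1, 100), client(2, 300)],
                                    [client(3, 200), client(4, 400)]])
print(global_model["w"])
```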

> @chaoyanghe Thanks for your detailed explanation. Maybe I can try to complete it by myself, and when I finish it I would like to push it to your master...

@wizard1203 Do you mean modifying it based on this code? https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/distributed/fedavg

@BESTTOOLBOX we do support deploying the server aggregator in a self-hosted environment. Please check the CLI command at: https://doc.fedml.ai/mlops/api.html

```
# login as edge server with local pip mode:
fedml login...
```