Parallel computing support

Open · JackingChen opened this issue 2 years ago · 2 comments

Hi, thanks for providing this wonderful repository. I'm wondering if there will be support for parallelizing client training within each round.

Specifically, I'd like the local update loop in federated_main.py to be executed by parallel processes:

for idx in idxs_users:
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    w, loss = local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)
    local_weights.append(copy.deepcopy(w))
    local_losses.append(copy.deepcopy(loss))

Or, are there suggestions for how to start working on this approach?
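
For concreteness, this is roughly the direction I have in mind. It is only an untested sketch reusing the repo's own names (LocalUpdate, user_groups, idxs_users, global_model, epoch); the pool size of 4 is arbitrary, it assumes a Unix 'fork' start method with the pool created inside the round loop so workers inherit the current global state, and CUDA training would instead need the 'spawn' start method with state passed to the workers explicitly:

import copy
import torch.multiprocessing as mp

def train_client(idx):
    # Runs one client's local update in a worker process. With 'fork',
    # module-level state (args, train_dataset, user_groups, logger,
    # global_model, epoch) is inherited at pool-creation time.
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    w, loss = local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)
    return w, loss

# Create the pool inside each communication round so workers see the
# freshly aggregated global_model.
with mp.Pool(processes=4) as pool:
    results = pool.map(train_client, idxs_users)

local_weights = [copy.deepcopy(w) for w, _ in results]
local_losses = [loss for _, loss in results]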

JackingChen avatar Apr 13 '23 09:04 JackingChen

"parallelization of client training in each round"

I also have the same question. Thanks.

saigontrade88 avatar Jul 07 '23 18:07 saigontrade88

It isn't supported out of the box right now, but you can use threading to do it yourself.
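
A minimal sketch of that threading route, assuming the same variables as the loop quoted above (args, train_dataset, user_groups, logger, global_model, epoch, idxs_users); the worker count of 4 is arbitrary. PyTorch releases the GIL inside most tensor operations, so threads can overlap a useful amount of the compute, although Python-side work still serializes:

import copy
from concurrent.futures import ThreadPoolExecutor

def train_client(idx):
    # Each thread trains on its own deep copy of the global model,
    # so concurrent clients cannot step on each other's parameters.
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    return local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)

with ThreadPoolExecutor(max_workers=4) as pool:  # worker count is a guess
    results = list(pool.map(train_client, idxs_users))

local_weights = [copy.deepcopy(w) for w, _ in results]
local_losses = [loss for _, loss in results]

Aggregation can then proceed exactly as in the sequential version.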

Xiaoni-61 avatar Nov 01 '23 02:11 Xiaoni-61