Federated-Learning-PyTorch
Parallel computing support
Hi, thanks for providing this wonderful repository. I'm wondering if there will be support for parallelizing client training in each round; specifically, running the local update loop in federated_main.py across parallel processes:
for idx in idxs_users:
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    w, loss = local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)
    local_weights.append(copy.deepcopy(w))
    local_losses.append(copy.deepcopy(loss))
Or, are there any suggestions for how to start working on this? For context, a rough sketch of the kind of process-based variant I have in mind is below.
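This only uses the LocalUpdate interface shown above; the train_one_client helper, the use of torch.multiprocessing, and the worker count are my own assumptions, not part of the repository. It also relies on args, train_dataset, and the model being picklable (a SummaryWriter-style logger probably is not, so it is replaced with None here), and CUDA would need extra care when weights cross process boundaries.

import copy
import torch.multiprocessing as mp

def train_one_client(idx, args, train_dataset, user_groups, global_model, epoch):
    # Hypothetical worker: one client's local update, run in its own process.
    # logger is dropped because a SummaryWriter-style object may not pickle.
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=None)
    return local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)

# 'spawn' is the safer start method if CUDA is in play; this block has to
# run under an `if __name__ == "__main__":` guard.
mp.set_start_method('spawn', force=True)
with mp.Pool(processes=4) as pool:  # 4 workers is an assumed value
    jobs = [(idx, args, train_dataset, user_groups, global_model, epoch)
            for idx in idxs_users]
    results = pool.starmap(train_one_client, jobs)

local_weights = [copy.deepcopy(w) for w, loss in results]
local_losses = [loss for w, loss in results]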
I also have the same question about parallelizing client training in each round. Thanks.
It's not supported right now, but you can use threading to do it.
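A minimal sketch of that threading idea, reusing the names from the loop above (ThreadPoolExecutor and max_workers=4 are choices made here, not anything from the repository). Keep in mind that Python threads share the GIL, so any speedup mainly comes from the parts of training that release it (GPU kernels, data loading); with a single GPU the clients will still largely serialize on the device.

import copy
from concurrent.futures import ThreadPoolExecutor

def run_client(idx):
    # One client's local update; each thread trains its own copy of the global model.
    local_model = LocalUpdate(args=args, dataset=train_dataset,
                              idxs=user_groups[idx], logger=logger)
    return local_model.update_weights(
        model=copy.deepcopy(global_model), global_round=epoch)

# Collect results in the main thread so the shared lists are only touched here.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(run_client, idxs_users))

local_weights = [copy.deepcopy(w) for w, loss in results]
local_losses = [copy.deepcopy(loss) for w, loss in results]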