FedML
What should I do if I only have a GPU node with 8 graphics cards to run the distributed algorithms?
Could anyone please tell me what I should do to run the distributed algorithms if I only have one GPU node that functions as both the login node and the compute node?
The FedAvg algorithm runs well under FedML's distributed architecture on the hardware setup described above.
But when I try to run the FedNAS, FedGKT, or FedAvg-robust algorithms, they all fail for the same reason in the end, as shown in the screenshots below.
FedNAS:
FedGKT:
@iuserea I just ran our code using 4 and 8 compute nodes, and it works well. The issue you mentioned happens when you have only 1 compute node but do not change the compute topology. To address this, you can modify the computing topology in the init_training_device() function, forcing all workers/clients to run on the same GPU device ID. Since many clients/workers then share the same GPU device, you may also need to make the batch size smaller to fit the memory constraint, which may degrade accuracy a little and lead to relatively slow training. Besides, you should also change client_number/worker_number in run_xxx.sh.
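For illustration, a minimal sketch of what such a single-node init_training_device() could look like (round-robin over the local GPUs; the signature mirrors FedML's main_fedavg.py, but treat the details as an assumption rather than the exact upstream code):

```python
import torch

def init_training_device(process_id, worker_number, gpu_num_per_machine):
    # Rank 0 is the server; ranks 1..worker_number are the clients.
    if not torch.cuda.is_available():
        return torch.device("cpu")
    if process_id == 0:
        return torch.device("cuda:0")
    # Round-robin the client ranks over this single node's GPUs. To force
    # ALL workers/clients onto one device ID instead, return "cuda:0" here
    # too (and shrink the batch size to fit the memory constraint).
    gpu_id = (process_id - 1) % gpu_num_per_machine
    return torch.device(f"cuda:{gpu_id}")
```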
@chaoyanghe Could the FedGKT algorithm be run with only 2 clients, or even one client? I tried but failed.
run_fedgkt.sh:
main_fedgkt.py:
CMD for 10 clients: sh run_FedGKT.sh 8 cifar10 homo 10 10 1 Adam 0.001 1 0 resnet56 fedml_resnet56_homo_cifar10 "./../../../data/cifar10" 64 10
The success flag I found is b_all_received = True.
When the process fails, either it ends with b_all_received = False, or the b_all_received variable never appears at all after all the clients finish training (see the sketch below).
However, that is just the surface of the real problem.
When I set the client/worker number to 2, the FedGKT algorithm still creates 8 processes, which may be what causes it to fail.
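For context, here is a minimal sketch of the kind of server-side bookkeeping that produces b_all_received (the names follow FedML's aggregator pattern, but treat them as assumptions; details differ per algorithm). It also shows why launching a different number of client processes than the aggregator expects leaves the flag stuck at False:

```python
class Aggregator:
    """Sketch: server-side tracking of which clients have uploaded results."""

    def __init__(self, worker_num):
        self.worker_num = worker_num
        self.flag_client_model_uploaded_dict = {
            idx: False for idx in range(worker_num)
        }

    def add_local_trained_result(self, index, model_params):
        # Called by the server's message handler when client `index` uploads.
        self.flag_client_model_uploaded_dict[index] = True

    def check_whether_all_receive(self):
        # b_all_received becomes True only once every expected client index
        # has reported in. If fewer client processes were launched than
        # worker_num expects, some indices never upload and this stays False.
        return all(self.flag_client_model_uploaded_dict.values())
```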
Change your sh script: -n is still 9. mpirun needs one rank for the server plus one per client, so for 2 clients -n should be 3.
@chaoyanghe
The -n option is not essential for training two clients.
The question is that when training the two clients, the messages below didn't appear:
handle_message_receive_feature_and_logits_from_client
add_model. index = 7
Hi @iuserea We now support GPU mapping; please have a look at this: https://github.com/FedML-AI/FedML/blob/966a36db96d8987b27ef2203034b1cd92b5cd40c/fedml_experiments/distributed/fedavg/main_fedavg.py#L308
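If it is the same mechanism as in the experiment configs, the mapping is driven by a small YAML file in which each host gets a list stating how many processes land on each of its GPUs. A sketch for the single-node, 8-GPU case from this thread (the key and host names are placeholders, and the exact file format is my assumption based on FedML's gpu_mapping.yaml):

```yaml
# 9 MPI processes (1 server + 8 clients) over 8 GPUs on one host;
# entry i = number of processes pinned to GPU i.
mapping_single_node_8gpus:
    localhost: [2, 1, 1, 1, 1, 1, 1, 1]
```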
Hi @iuserea, could you please share your configuration of both software and hardware?
I only have two GPUs in one server; how can I train FedGKT?
@iuserea Hi, how do you set your mpi_host_file?
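For a single machine, the hostfile usually just needs to give localhost enough slots for every rank (one server plus one per client). A minimal sketch, assuming OpenMPI hostfile syntax and the 8-client setup from this thread; match the slot count to the -np you pass to mpirun:

```
# mpi_host_file: one node hosting all 9 ranks (1 server + 8 clients)
localhost slots=9
```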
@rambo-coder @iuserea @chaoyanghe Did you guys figure out how to run FedGKT on a single machine with multiple GPUs? I followed the thread but was not able to make it finish successfully (b_all_received = False).
@chaoyanghe Most researchers have a single machine with multiple GPUs. It would be nice to have a guide for this, especially since the library is designed specifically for researchers.
Hello @korawat-tanwisuth Did you manage to run FedGKT on a single machine with multiple GPUs?