
What should I do if I only have a GPU node with 8 graphics cards to run the distributed algorithms?

Open iuserea opened this issue 4 years ago • 17 comments

Could anyone please tell me what I should do if I only have one GPU node, functioning as both a login node and a compute node, to run the distributed algorithms?
The fedavg algorithm works pretty well under FedML's distributed architecture on the hardware setup described above.
But when I try to run the fednas, fedgkt, and fedavg_robust algorithms, they all fail for the same reason in the end, as shown in the screenshots below.

[screenshots: FedNAS and FedGKT error logs]

iuserea avatar Nov 29 '20 08:11 iuserea

@iuserea I just ran our code using 4 and 8 compute nodes, and it works well. The issue you mentioned happens when you have only 1 compute node but do not change the compute topology. To address it, you can modify the computing topology in the init_training_device() function, forcing all workers/clients to run on the same GPU device ID. Since many clients/workers will then share the same GPU device, you may also need to reduce the batch size to fit the memory constraints, which may degrade accuracy a little and lead to relatively slow training. Besides, you should also change client_number/worker_number in run_xxx.sh.
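A minimal sketch of the single-GPU workaround described above. The function name comes from the comment; the body is illustrative, not FedML's actual code:

```python
# Illustrative sketch only: FedML's real init_training_device() differs.
def init_training_device(process_id, worker_number, gpu_ids):
    """Pick a device string for an MPI process.

    process_id 0 is the server; workers 1..worker_number share the
    listed GPUs round-robin. Passing a single-element gpu_ids list
    (e.g. [0]) forces the server and every worker onto that one GPU.
    """
    if process_id == 0:
        return f"cuda:{gpu_ids[0]}"
    gpu_index = (process_id - 1) % len(gpu_ids)
    return f"cuda:{gpu_ids[gpu_index]}"
```

With `gpu_ids=[0]`, every process lands on `cuda:0`, which is why the batch size may then need to shrink to fit memory.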

chaoyanghe avatar Nov 29 '20 17:11 chaoyanghe

@chaoyanghe Can the fedgkt algorithm run with only 2 clients, or even one client? I tried but failed.

iuserea avatar Nov 30 '20 02:11 iuserea

run_fedgkt.sh: [screenshot]

iuserea avatar Nov 30 '20 03:11 iuserea

main_fedgkt.py: [screenshot]

iuserea avatar Nov 30 '20 03:11 iuserea

CMD for 10 clients: sh run_FedGKT.sh 8 cifar10 homo 10 10 1 Adam 0.001 1 0 resnet56 fedml_resnet56_homo_cifar10 "./../../../data/cifar10" 64 10

iuserea avatar Nov 30 '20 03:11 iuserea

The success flag I found is b_all_received = True. When the process failed, it either ended with b_all_received = False, or the b_all_received variable never appeared at all after all the clients finished training. [screenshot] However, that is just the surface of the real problem.
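For context, the b_all_received flag mentioned above is the server-side check that every client's result has arrived before the round can advance. A hedged sketch of that bookkeeping (names modeled loosely on FedML's aggregator, simplified):

```python
# Hedged sketch, not FedML's actual aggregator code.
class ResultTracker:
    def __init__(self, worker_num):
        # one "uploaded" flag per client/worker
        self.flag_uploaded = {i: False for i in range(worker_num)}

    def add_result(self, index):
        # called when a handle_message_receive_* handler fires
        # for client `index`
        self.flag_uploaded[index] = True

    def check_whether_all_receive(self):
        # b_all_received: the round only advances once every flag
        # is set, so one missing client message stalls training
        return all(self.flag_uploaded.values())
```

This is why b_all_received = False is only a symptom: the real question is which client's message never arrived.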

iuserea avatar Nov 30 '20 03:11 iuserea

When I set the client/worker number to 2, the fedgkt algorithm still creates 8 processes, which may itself cause the failure. [screenshot]

iuserea avatar Nov 30 '20 08:11 iuserea

Change your sh script: the -n argument (MPI process count) is still 9.
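An illustrative run-script fragment showing the point above (variable names are assumptions, not FedML's exact script): MPI must launch one process per client plus one for the server, so the process count has to track the client number rather than stay hard-coded at 9.

```shell
# Illustrative fragment of a run_*.sh-style launcher.
CLIENT_NUM=2
# one extra process for the server
PROCESS_NUM=$(expr $CLIENT_NUM + 1)
echo "mpirun -np $PROCESS_NUM"
# mpirun -np $PROCESS_NUM -hostfile ./mpi_host_file python3 ./main_fedgkt.py ...
```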

chaoyanghe avatar Nov 30 '20 16:11 chaoyanghe

@chaoyanghe
The -n option is not essential for training with two clients. [screenshot] The question is that when training the two clients, the message below never appears:
handle_message_receive_feature_and_logits_from_client add_model. index = 7

iuserea avatar Dec 02 '20 08:12 iuserea

Hi @iuserea, we now support GPU mapping; please have a look at this: https://github.com/FedML-AI/FedML/blob/966a36db96d8987b27ef2203034b1cd92b5cd40c/fedml_experiments/distributed/fedavg/main_fedavg.py#L308
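To illustrate the idea behind GPU mapping (numbers and names below are made up, not FedML's actual config or API): the mapping says how many processes land on each GPU, and from it each MPI process can derive its GPU id.

```python
# Hypothetical illustration of expanding a per-GPU process-count
# mapping into a per-process GPU assignment.
def expand_gpu_mapping(process_counts):
    """Return proc_to_gpu, where proc_to_gpu[i] is the GPU id
    assigned to MPI process i."""
    proc_to_gpu = []
    for gpu_id, count in enumerate(process_counts):
        proc_to_gpu.extend([gpu_id] * count)
    return proc_to_gpu

# e.g. 1 server + 10 clients spread over 4 of an 8-GPU node:
#   expand_gpu_mapping([3, 3, 3, 2, 0, 0, 0, 0])
```

This is how a single machine with 8 GPUs can host many client processes without every process piling onto GPU 0.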

chaoyanghe avatar Jan 09 '21 03:01 chaoyanghe

Hi @iuserea, could you please share your software and hardware configuration?

rambo-coder avatar Apr 29 '21 09:04 rambo-coder

I only have two GPUs in one server; how can I train fedgkt?

rambo-coder avatar Apr 29 '21 09:04 rambo-coder

@iuserea Hi, how did you set your mpi_host_file?

rambo-coder avatar Apr 29 '21 09:04 rambo-coder

[screenshot]

rambo-coder avatar Apr 29 '21 11:04 rambo-coder

[screenshot]

rambo-coder avatar Apr 29 '21 13:04 rambo-coder

@rambo-coder @iuserea @chaoyanghe Did you guys figure out how to run fedgkt on a single machine with multiple GPUs? I followed the thread but was not able to make it finish successfully (b_all_received=False).

@chaoyanghe Most researchers have a single machine with multiple GPUs. It would be nice to have a guide for this, especially since the library is designed specifically for researchers.

korawat-tanwisuth avatar Aug 26 '21 01:08 korawat-tanwisuth

> @rambo-coder @iuserea @chaoyanghe Did you guys figure out how to run fedgkt on a single machine with multiple GPUs? I followed the thread but was not able to make it finish successfully (b_all_received=False).
>
> @chaoyanghe Most researchers have a single machine with multiple GPUs. It would be nice to have a guide for this, especially since the library is designed specifically for researchers.

Hello @korawat-tanwisuth, did you manage to run FedGKT on a single machine with multiple GPUs?

SahadevPoudel avatar Nov 22 '21 10:11 SahadevPoudel