Federated-Learning-Backdoor
A question about the train time
Hi, I'm trying to reproduce the experiments for CIFAR-10. When I run the command python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1 --attack_num 250 --gradmask_ratio 1.0 --edge_case 0
on Windows, the training time for a single epoch is close to 8 minutes, so the full 1800-epoch run seems unacceptably long. Do I have some wrong setting?
My PyTorch version is 2.1.0, and my GPU is an NVIDIA GeForce RTX 2060.
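For reference, here is a minimal check (not from this repo, just a generic PyTorch sketch) to rule out a silent CPU fallback, since a CPU-only PyTorch build on Windows could explain epoch times this long:

```python
# Quick sanity check that training is actually running on the GPU.
import torch

print(torch.__version__)                  # a CUDA build should report e.g. "2.1.0+cu121", not "+cpu"
print(torch.cuda.is_available())          # must be True for GPU training
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # expect "NVIDIA GeForce RTX 2060"
```

If torch.cuda.is_available() prints False, is the slow epoch time expected, or should I reinstall a CUDA-enabled PyTorch build?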