jhcknzzm
Actually, transforms.RandomCrop() is only used for data augmentation, and in the edge case ("Attack of the Tails: Yes, You Really Can Backdoor Federated Learning") the trigger data is actually out-of-distribution...
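To make that concrete, here is a minimal sketch of the kind of pipeline we mean (the crop size, padding, and normalization constants below are illustrative CIFAR-style values, not copied from our repo): RandomCrop only appears in the training-time transform, so test data and trigger samples are never cropped.

```python
# Illustrative sketch only: exact parameters differ from the values in our repo.
from torchvision import transforms

# Training pipeline: RandomCrop is used purely for data augmentation.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

# Test pipeline: no RandomCrop, only tensor conversion and normalization.
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
```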
OK, got it.
Hello, thank you for reading our code carefully. Of course, as with any machine learning task, generalization ability is still very important. We have updated our code to clear up the confusion. Please...
For the EMNIST task you mentioned, in the base case the attacker's goal is to get the final model to misclassify certain datapoints, and their training dataset is the same as...
Yeah, thanks. For the EMNIST task in the edge case, the training data is not the same as the test data; we will highlight this when we update our paper, we...
For the EMNIST dataset we also did the following additional experiment: the poisoned data consists of images of the digit 7 from the ARDIS dataset (the dataset...
Hi, I think you should run main_training.py instead of FL_Backdoor.py. You can find examples of how to execute the code in the file run_backdoor_cv_task.sh.
This is weird, because in the code for the CV task we did not add DP; in the paper our results about DP are mainly focused on the Reddit dataset...
Sorry, you can't remove --diff_privacy True, because in our code for CV tasks the server actually performs a defense (gradient norm clipping) when --diff_privacy True is set. If you remove...
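To show the kind of defense that flag enables, here is a rough sketch of server-side norm clipping; the function and variable names are made up for illustration, and the real logic lives in our training code with its own clipping bound.

```python
import torch

def clip_client_update(update, clip_bound):
    """Illustrative server-side defense: scale a client's model update so its
    global L2 norm does not exceed clip_bound. `update` is assumed to be a
    dict mapping parameter names to update tensors (a hypothetical layout)."""
    flat = torch.cat([p.reshape(-1) for p in update.values()])
    norm = flat.norm(p=2)
    scale = torch.clamp(clip_bound / (norm + 1e-12), max=1.0)
    return {name: p * scale for name, p in update.items()}
```

With such clipping in place, a single malicious client cannot submit an arbitrarily large update, which is why removing the flag changes the attack results.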
I think it is gradmask_ratio==0.95. This setting works fine on my machine, but maybe you can try a different gradmask_ratio, because there may be some randomness in...
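For reference, here is a hedged sketch of the gradient-masking idea that gradmask_ratio controls, assuming it denotes the fraction of coordinates the attacker keeps (those with the smallest benign-gradient magnitude); the helper name is hypothetical and not taken from our code.

```python
import torch

def mask_malicious_gradient(malicious_grad, benign_grad, gradmask_ratio=0.95):
    """Hypothetical helper: keep the attacker's gradient only on the
    coordinates where the benign gradient magnitude is smallest, and zero
    out the remaining (top 1 - gradmask_ratio) coordinates."""
    flat_benign = benign_grad.reshape(-1).abs()
    k = int(gradmask_ratio * flat_benign.numel())
    # Indices of the k smallest-magnitude benign coordinates.
    keep_idx = torch.topk(flat_benign, k, largest=False).indices
    mask = torch.zeros_like(flat_benign)
    mask[keep_idx] = 1.0
    return malicious_grad * mask.reshape(benign_grad.shape)
```

With gradmask_ratio==0.95, about 5% of coordinates are zeroed out, so small changes to the ratio (or to random seeds) can shift which coordinates carry the attack.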