Federated-Learning-Backdoor
Making sure I understand the trigger logic
I do not understand why the training data needs to be random-cropped after padding by 4, using `transforms.RandomCrop(32, padding=4)`. At first I thought it might be part of the trigger setting, but then I found that you build the trigger as in "Attack of the Tails: Yes, You Really Can Backdoor Federated Learning". If it is as I thought, the poisoned data has its label changed to index [9] and is flipped as the trigger. I can understand that for the poisoned data, but why do the benign data also need padding and flipping? Finally, would a better hidden trigger improve the Lifespan?
Actually, `transforms.RandomCrop()` is only used for data augmentation. In the edge-case attack ("Attack of the Tails: Yes, You Really Can Backdoor Federated Learning"), the trigger is actually out-of-distribution data; for example, for the MNIST dataset the trigger is a digit from another dataset, the ARDIS dataset. I would expect that a trigger that differs significantly from the distribution of benign data might increase the Lifespan, but the conclusion may be the opposite of what I thought: if the trigger is very different from benign data, the backdoor may be easy to remove when the model is fine-tuned on benign data. I don't understand what you mean by hidden triggers, though.
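To illustrate the augmentation point above: `transforms.RandomCrop(32, padding=4)` zero-pads a 32x32 image to 40x40 and then takes a random 32x32 crop, so the network sees slightly shifted versions of each image. A minimal stdlib-only sketch of that behavior (the function name and the dummy all-ones image are just for illustration, not from the repo):

```python
import random

def random_crop_with_padding(img, size=32, padding=4):
    """Zero-pad an (H, W) image on all sides, then take a random
    size x size crop -- mimicking transforms.RandomCrop(32, padding=4)."""
    h, w = len(img), len(img[0])
    ph, pw = h + 2 * padding, w + 2 * padding
    # zero-pad the image
    padded = [[0] * pw for _ in range(ph)]
    for r in range(h):
        for c in range(w):
            padded[r + padding][c + padding] = img[r][c]
    # pick a random top-left corner for the crop
    top = random.randint(0, ph - size)
    left = random.randint(0, pw - size)
    return [row[left:left + size] for row in padded[top:top + size]]

# a dummy 32x32 all-ones "image"
img = [[1] * 32 for _ in range(32)]
crop = random_crop_with_padding(img)
```

Note the crop is applied to every training sample, benign or poisoned, purely to improve generalization; it carries no trigger semantics.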
By hidden triggers I mean triggers that hide a perturbation inside the dataset, rather than using mislabeled or out-of-distribution data as the trigger.
OK, got it.
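The "hidden trigger" idea discussed above can be sketched as follows: instead of inserting out-of-distribution samples, the attacker blends a small perturbation into otherwise normal images and flips their label to the target class (index 9, as mentioned in the question). This is only an illustrative stdlib sketch under assumed names; real hidden-trigger attacks typically learn the perturbation by optimization, and the fixed checkerboard pattern here is purely hypothetical.

```python
TARGET_LABEL = 9  # class the attacker wants poisoned samples mapped to

def add_hidden_trigger(img, epsilon=0.05):
    """Blend a small fixed perturbation into an image with pixel values
    in [0, 1], so the trigger stays close to the benign distribution."""
    out = []
    for r, row in enumerate(img):
        new_row = []
        for c, v in enumerate(row):
            # hypothetical checkerboard perturbation of magnitude epsilon
            delta = epsilon if (r + c) % 2 == 0 else -epsilon
            # clip back into the valid pixel range
            new_row.append(min(1.0, max(0.0, v + delta)))
        out.append(new_row)
    return out

def poison(sample):
    """Return a poisoned copy: perturbed image, label flipped to target."""
    img, _label = sample
    return add_hidden_trigger(img), TARGET_LABEL

# a dummy mid-gray 32x32 image labeled as class 0
img = [[0.5] * 32 for _ in range(32)]
p_img, p_label = poison((img, 0))
```

Because the perturbed images stay close to benign data, such a trigger may survive benign fine-tuning longer than an out-of-distribution one, which is exactly the trade-off raised in the answer above.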