backdoor_federated_learning

Backdoor accuracy can't reach 100%

Open · ybdai7 opened this issue 3 years ago · 1 comment

I have some problems trying to reproduce the experiment results in this paper.

I pulled the source code and ran it with params.yaml after setting is_poison to True and switching the report-loss options to True. After training finished, I found that the test accuracy on the backdoor task can't reach 100% as reported in the paper; it usually stays around 70%, sometimes lower. I am wondering whether something needs to be set differently or whether something is wrong with my params.yaml. Could someone please help me?

Here are my params.yaml and the final results.

type: image
lr: 0.1
momentum: 0.9
decay: 0.0005
batch_size: 64
no_models: 10
epochs: 10100
retrain_no_times: 2
number_of_total_participants: 100
sampling_dirichlet: True
dirichlet_alpha: 0.9
eta: 1
save_model: True
save_on_epochs: [10, 100, 500, 1000, 2000, 5000]
resumed_model: False
environment_name: ppdl_experiment_Jul.13_13.34
report_train_loss: True
report_test_loss: True
report_poison_loss: True
track_distance: False
track_clusters: False
modify_poison: False
poison_type: wall
poison_images_test: [330, 568, 3934, 12336, 30560]
poison_images: [30696, 33105, 33615, 33907, 36848, 40713, 41706]
poison_image_id: 2775
poison_image_id_2: 1605
poison_label_swap: 2
size_of_secret_dataset: 200
poisoning_per_batch: 1
poison_test_repeat: 1000
is_poison: True
baseline: False
random_compromise: False
noise_level: 0.01
poison_epochs: [10000]
retrain_poison: 15
scale_weights: 100
poison_lr: 0.05
poison_momentum: 0.9
poison_decay: 0.005
poison_step_lr: True
clamp_value: 1.0
alpha_loss: 1.0
number_of_adversaries: 1
poisoned_number: 2
results_json: False
s_norm: 1000000
diff_privacy: False
fake_participants_load: False
fake_participants_file: data/reddit/updates_cifar.pt.tar
fake_participants_save: False
current_time: Jul.13_14.56.08
adversary_list: [0]
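
For reference, a minimal sketch of how backdoor (poison-task) accuracy is usually measured in this kind of setup: the held-out backdoor images keep their pixels but are expected to be classified as the attacker-chosen label (poison_label_swap: 2 above). This is a generic PyTorch illustration under that assumption, not the repo's own test code; the function name and arguments are hypothetical.

```python
# Minimal sketch (not the repo's evaluation code) of how backdoor accuracy is
# typically measured: the held-out poison images keep their original pixels but
# are counted as correct only if the model predicts the attacker's target label.
import torch

def backdoor_accuracy(model, dataset, poison_images_test, target_label=2, device="cpu"):
    model.eval()
    correct = 0
    with torch.no_grad():
        for idx in poison_images_test:
            image, _ = dataset[idx]              # ground-truth label is ignored
            logits = model(image.unsqueeze(0).to(device))
            correct += int(logits.argmax(dim=1).item() == target_label)
    return correct / len(poison_images_test)
```

With the config above this would be called with poison_images_test = [330, 568, 3934, 12336, 30560] and target_label = 2.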

[screenshot of the final results]

ybdai7 · Jul 16 '22 04:07

Hey, I don't really support the code anymore, so it might not work well. Maybe try to test memorization first, i.e., mix the test and train poison_images together.
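
A sketch of what that memorization check might look like, assuming the config is the plain YAML file shown above; the utils/params.yaml path and the output filename are assumptions, not the repo's documented workflow. The idea is to merge the two index lists so the backdoor test images are exactly the images that were trained on.

```python
# Hypothetical memorization check (not part of the repo): train and test the
# backdoor on the same images by merging the two index lists from params.yaml.
import yaml

with open("utils/params.yaml") as f:       # assumed location of the config
    params = yaml.safe_load(f)

merged = sorted(set(params["poison_images"]) | set(params["poison_images_test"]))
params["poison_images"] = merged
params["poison_images_test"] = merged      # backdoor test set == backdoor train set

with open("utils/params_memorization.yaml", "w") as f:   # hypothetical output file
    yaml.safe_dump(params, f)
```

If the backdoor accuracy reaches ~100% on that mixed set but still stalls around 70% on the original held-out poison_images_test, that would point to the semantic backdoor not generalizing to unseen images rather than to a training or configuration bug.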

ebagdasa · Jul 16 '22 20:07