
This is an implementation demo of the IJCAI 2022 paper [Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation](https://arxiv.org/abs/2204.09975) in PyTorch.

Attention Relation Graph Distillation

Python 3.6 · PyTorch 1.10 · CUDA 10.0 · License: CC BY-NC

We have already uploaded the all2one pretrained backdoored student model (i.e., gridTrigger, WRN-16-1, target label 0) and the clean teacher model (i.e., WRN-16-1) to ./weight/s_net and ./weight/t_net, respectively.
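If you only want to inspect these checkpoints, a minimal loading sketch is shown below. The WideResNet import, the constructor arguments, the checkpoint file names, and the "state_dict" key are assumptions for illustration, not guaranteed to match this repository's layout.

```python
import torch

# Hypothetical import: the actual WRN-16-1 definition may live elsewhere in this repo.
from models.wresnet import WideResNet  # assumption


def load_wrn16_1(ckpt_path, device="cpu"):
    # WRN-16-1 for CIFAR-10: depth 16, widen factor 1, 10 classes (assumed constructor).
    model = WideResNet(depth=16, num_classes=10, widen_factor=1, dropRate=0.0)
    ckpt = torch.load(ckpt_path, map_location=device)
    # Checkpoints are often saved as {'state_dict': ...}; fall back to a raw state dict.
    state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    model.load_state_dict(state)
    return model.to(device).eval()


student = load_wrn16_1("./weight/s_net/WRN-16-1.tar")  # backdoored student (file name assumed)
teacher = load_wrn16_1("./weight/t_net/WRN-16-1.tar")  # clean teacher (file name assumed)
```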

To evaluate the performance of ARGD, you can simply run:

$ python main-ARGD.py 

where the default parameters are shown in config.py.

The trained model will be saved at the path weight/erasing_net/<s_name>.tar

Please carefully read main.py and configs.py, then adjust the parameters for your experiment.

Erasing Results on BadNets under 5% clean data ratio

| Dataset  | Baseline ACC | Baseline ASR | ARGD ACC | ARGD ASR |
|----------|--------------|--------------|----------|----------|
| CIFAR-10 | 80.08        | 100.0        | 79.81    | 2.10     |

Training your own backdoored model

We have provided a DatasetBD class in data_loader.py for generating the training sets of different backdoor attacks.
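The actual interface is defined in data_loader.py; as a rough, self-contained illustration of what such a poisoner does (the class name, arguments, and trigger pattern below are assumptions for the sketch, not the repository's DatasetBD API), an all2one grid-trigger poisoning step can be written as:

```python
import numpy as np
from torch.utils.data import Dataset


class PoisonedDataset(Dataset):
    """Illustrative all2one grid-trigger poisoner (not the repo's DatasetBD)."""

    def __init__(self, clean_dataset, target_label=0, poison_rate=0.1, trigger_size=3):
        self.dataset = clean_dataset
        self.target_label = target_label
        self.trigger_size = trigger_size
        n = len(clean_dataset)
        # Randomly choose which sample indices get the trigger stamped on them.
        self.poison_idx = set(np.random.choice(n, int(n * poison_rate), replace=False))

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        img, label = self.dataset[idx]
        img = np.array(img)  # assume an HWC uint8 image (e.g., CIFAR-10 via PIL)
        if idx in self.poison_idx:
            img = self._add_grid_trigger(img)
            label = self.target_label  # all2one: every poisoned sample maps to one class
        return img, label

    def _add_grid_trigger(self, img):
        # Stamp a small checkerboard pattern in the bottom-right corner.
        s = self.trigger_size
        for i in range(s):
            for j in range(s):
                img[-1 - i, -1 - j] = 255 if (i + j) % 2 == 0 else 0
        return img
```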

To implement a backdoor attack (e.g., the GridTrigger attack), you can run the command below:

$ python train_badnet.py 

This command will train the backdoored model and print the clean accuracy and attack success rate. You can also select the other backdoor triggers reported in the paper.
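For reference, clean accuracy and attack success rate (ASR) are typically measured as in the sketch below; it assumes the poisoned test loader already stamps the trigger on its images, and it is not the repository's actual evaluation code.

```python
import torch


@torch.no_grad()
def evaluate(model, clean_loader, poisoned_loader, target_label=0, device="cuda"):
    model.eval()

    # Clean accuracy: fraction of untouched test images classified correctly.
    correct, total = 0, 0
    for x, y in clean_loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    clean_acc = 100.0 * correct / total

    # Attack success rate: fraction of triggered images (true label != target)
    # that the model classifies as the attacker's target label.
    hit, total = 0, 0
    for x, y in poisoned_loader:  # this loader should already apply the trigger
        keep = y != target_label
        if keep.sum() == 0:
            continue
        pred = model(x[keep].to(device)).argmax(dim=1).cpu()
        hit += (pred == target_label).sum().item()
        total += keep.sum().item()
    asr = 100.0 * hit / total

    return clean_acc, asr
```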

Please carefully read train_badnet.py and configs.py, then adjust the parameters for your experiment.

How to get the teacher model?

We obtained the teacher model by fine-tuning all layers of the backdoored model on 5% clean data with data augmentation. In our paper, we only fine-tune the backdoored model for 5~10 epochs; please see Section 4.1 for more details of the experimental settings. The fine-tuning code is easy to obtain: simply train with the cls_loss alone, i.e., keep the distillation loss at zero during training.
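As a rough sketch of that procedure (the optimizer, learning rate, and augmentation pipeline here are assumptions; refer to Section 4.1 of the paper for the actual settings), fine-tuning with only the classification loss looks like:

```python
import torch
import torch.nn as nn


def finetune_teacher(model, clean_loader, epochs=5, lr=0.01, device="cuda"):
    """Fine-tune all layers of the backdoored model on ~5% clean data, cls_loss only."""
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()  # cls_loss; the distillation loss is zero here
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9, weight_decay=5e-4)

    for epoch in range(epochs):
        for x, y in clean_loader:  # clean_loader should apply data augmentation
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

    return model
```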

Other source of backdoor attacks

Attack

CL: Clean-label backdoor attacks

SIG: A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning

Refool: Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

Defense

MCR: Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness

**Fine-tuning**: Defending Against Backdooring Attacks on Deep Neural Networks

**Neural Attention Distillation**: Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks

STRIP: A Defence Against Trojan Attacks on Deep Neural Networks

Library

TrojanZoo: a universal PyTorch platform for conducting security research (especially on backdoor attacks/defenses) in image classification with deep learning.

Backdoors 101: a PyTorch framework for state-of-the-art backdoor defenses and attacks on deep learning models.

References

If you find this code useful for your research, please cite our paper.

Contacts

If you have any questions, please leave a message on GitHub (e.g., by opening an issue).