
Create backdoor-clean-label

Open · OrsonTyphanel93 opened this issue 1 year ago · 0 comments

Description

This code transforms the dirty-label audio backdoor attack into a truly robust clean-label attack.

Please include a summary of the change, the motivation, and which issue is fixed. Any dependency changes should also be included.

Fixes # (issue)

Type of change

This class implements a clean-label attack, specifically a poisoning attack that leaves the original labels untouched. Its main contribution is the following (a minimal sketch is given below):

A robust clean-label backdoor attack.
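
Below is a minimal sketch of what such a class could look like. The class name follows this issue, but the constructor arguments (`trigger`, `alpha`), the `poison` signature, and the assumption that inputs are 2D waveform batches are illustrative choices for this sketch, not ART's actual implementation.

```python
import numpy as np


class PoisoningAttackCleanLabelBackdoor:
    """Illustrative clean-label audio backdoor: labels are never modified."""

    def __init__(self, trigger: np.ndarray, alpha: float = 0.1):
        # trigger: audio waveform used as the backdoor signal (1D array)
        # alpha: blending factor controlling how audible the trigger is
        self.trigger = trigger
        self.alpha = alpha

    def poison(self, x: np.ndarray, y: np.ndarray):
        """Blend the trigger into each sample and return the data with the original labels."""
        x_poisoned = x.copy()
        n = min(len(self.trigger), x_poisoned.shape[1])
        # Alpha-blend the trigger so it stays faint even if its raw volume is high.
        x_poisoned[:, :n] = (1.0 - self.alpha) * x_poisoned[:, :n] + self.alpha * self.trigger[:n]
        # Clean-label attack: y is returned unchanged.
        return x_poisoned, y.copy()
```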

Please check all relevant options.

Test Configuration:

  • OS
  • Python version
  • ART version or commit number
  • TensorFlow / Keras / PyTorch / MXNet version

Checklist

  • [ ] My code follows the style guidelines of this project

  • [ ] This code defines a class `PoisoningAttackCleanLabelBackdoor` that performs a truly robust clean-label backdoor attack.

  • [ ] When the `poison` method is called, it applies the trigger function to the input data and returns the poisoned data with the same clean labels as the original data. An alpha blending factor keeps the attack imperceptible even when the audio trigger has a high volume (see the usage sketch after this checklist).
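
As a rough usage illustration of the behaviour described in the checklist, reusing the illustrative class sketched above (all shapes, sample rates, and values here are made up for the example):

```python
import numpy as np

# Example data: 4 one-second clips at 16 kHz and their clean labels.
x = np.random.uniform(-1.0, 1.0, size=(4, 16000)).astype(np.float32)
y = np.array([0, 1, 0, 1])

# A short 440 Hz tone as the audio trigger; a small alpha keeps it nearly inaudible.
t = np.linspace(0.0, 0.1, 1600, endpoint=False)
trigger = 0.8 * np.sin(2.0 * np.pi * 440.0 * t).astype(np.float32)

attack = PoisoningAttackCleanLabelBackdoor(trigger=trigger, alpha=0.05)
x_poisoned, y_poisoned = attack.poison(x, y)

assert np.array_equal(y, y_poisoned)  # labels stay clean
assert x_poisoned.shape == x.shape    # only the waveform is perturbed
```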

OrsonTyphanel93 · Sep 07 '23 16:09