
Implementation of the Dirty Label Backdoor Attack


**Is your feature request related to a problem? Please describe.**
Under art.attacks.poisoning, the PoisoningAttackBackdoor object lets the user insert backdoor triggers and thereby carry out the Dirty Label Backdoor Attack, but the attack itself has to be assembled manually. There is no implementation that performs the full attack end to end (a sketch of the current manual workflow is shown below).
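For reference, here is a minimal sketch of that manual workflow. The helper name dirty_label_backdoor, the add_pattern_bd trigger, the 10% poison rate, and the assumption of one-hot labels on image data are illustrative choices for this sketch, not part of ART:

```python
import numpy as np

from art.attacks.poisoning import PoisoningAttackBackdoor
from art.attacks.poisoning.perturbations import add_pattern_bd


def dirty_label_backdoor(x_train, y_train, target_class, percent_poison=0.1):
    """Poison a fraction of the training set: stamp a trigger and flip labels to target_class."""
    backdoor = PoisoningAttackBackdoor(add_pattern_bd)

    # One-hot label of the class the attacker wants triggered inputs mapped to.
    target_label = np.zeros(y_train.shape[1])
    target_label[target_class] = 1

    # Only poison samples that do not already belong to the target class.
    source_idx = np.where(np.argmax(y_train, axis=1) != target_class)[0]
    chosen = np.random.choice(source_idx, int(percent_poison * len(source_idx)), replace=False)

    # Insert the trigger and relabel the chosen samples ("dirty" labels).
    x_poison, y_poison = backdoor.poison(x_train[chosen], y=target_label, broadcast=True)

    # Splice the poisoned samples back into a copy of the training data.
    x_out, y_out = x_train.copy(), y_train.copy()
    x_out[chosen] = x_poison
    y_out[chosen] = y_poison
    return x_out, y_out
```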

**Describe the solution you'd like**
Implement a PoisoningAttackDirtyLabelBackdoor object in a similar style and structure to the existing PoisoningAttackCleanLabelBackdoor object, placed under the art.attacks.poisoning module. A Jupyter notebook demo will also be created to demonstrate how to use the attack. A rough sketch of a possible interface follows below.
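To make the scope concrete, one possible shape for the new object is sketched here. The constructor arguments, defaults, and standalone class are only a proposal for discussion, not an existing ART API; the real implementation would presumably derive from ART's poisoning attack base class and reuse PoisoningAttackBackdoor for trigger insertion:

```python
import numpy as np

from art.attacks.poisoning import PoisoningAttackBackdoor


class PoisoningAttackDirtyLabelBackdoor:
    """Hypothetical attack: select samples, insert the backdoor trigger, and relabel them to the target class."""

    def __init__(self, backdoor: PoisoningAttackBackdoor, target: np.ndarray, pp_poison: float = 0.33):
        self.backdoor = backdoor    # trigger-insertion attack to reuse
        self.target = target        # one-hot target label for poisoned samples
        self.pp_poison = pp_poison  # fraction of eligible samples to poison

    def poison(self, x: np.ndarray, y: np.ndarray):
        # Eligible sources are samples whose label differs from the target class.
        selectable = np.where(np.argmax(y, axis=1) != np.argmax(self.target))[0]
        chosen = np.random.choice(selectable, int(self.pp_poison * len(selectable)), replace=False)

        # Delegate trigger insertion and relabeling to the wrapped backdoor attack.
        x_poison, y_poison = self.backdoor.poison(x[chosen], y=self.target, broadcast=True)

        x_out, y_out = x.copy(), y.copy()
        x_out[chosen] = x_poison
        y_out[chosen] = y_poison
        return x_out, y_out
```

Usage would then mirror the clean-label attack, e.g. `attack = PoisoningAttackDirtyLabelBackdoor(PoisoningAttackBackdoor(add_pattern_bd), target_label)` followed by `x_p, y_p = attack.poison(x_train, y_train)`.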

**Describe alternatives you've considered**
N/A

**Additional context**
It is not yet decided whether the attack should be named PoisoningAttackDirtyLabelBackdoor (following the convention of older attacks such as PoisoningAttackCleanLabelBackdoor) or DirtyLabelBackdoorAttack (following the convention of newer attacks such as SleeperAgentAttack). A proper naming convention should be established for all poisoning attacks.
