
Mask for Adversarial Attacks

Open reheinrich opened this issue 3 years ago • 0 comments

Hi, thanks for the great work!

Do you plan to offer the possibility to define a mask for adversarial attacks (PGD, FGSM) in the future?

I'm thinking of a mask that defines which parts of the input the adversarial perturbations should be applied to.

This would make it possible to perturb only certain parts of the input while leaving the rest unchanged.

Thus, adversarial examples could be generated much more flexibly.
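For illustration, here is a minimal sketch of the idea, using a plain NumPy FGSM step rather than Captum's actual `captum.robust` API (the `masked_fgsm` helper, its signature, and the example values are all hypothetical, not existing Captum functionality):

```python
import numpy as np

def masked_fgsm(x, grad, eps, mask):
    """One FGSM step restricted by a binary mask: only entries where
    mask == 1 receive the signed perturbation; masked-out entries
    stay exactly at their original values.

    Hypothetical helper for illustration only, not part of Captum.
    """
    return x + eps * np.sign(grad) * mask

# Toy input, a precomputed loss gradient, and a mask that allows
# perturbation only at positions 0 and 2.
x = np.array([0.5, 0.5, 0.5, 0.5])
grad = np.array([1.0, -2.0, 3.0, -0.5])
mask = np.array([1.0, 0.0, 1.0, 0.0])

x_adv = masked_fgsm(x, grad, eps=0.1, mask=mask)
print(x_adv)  # [0.6 0.5 0.6 0.5] -> positions 1 and 3 are untouched
```

The same element-wise multiplication could be applied to the per-step update inside an iterative attack like PGD, so the accumulated perturbation never leaves the masked region.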

Thanks a lot!

reheinrich · May 12 '22 16:05