
PGD attack for Randomized Smoothing

Open YijiangPang opened this issue 3 years ago • 0 comments

Hi, I implemented the PGD (L2 and Linf) attack for randomized smoothing, following the paper "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" and its associated repo: https://github.com/Hadisalman/smoothing-adversarial/.

Attacking randomized smoothing aims to find the perturbation that most effectively fools the noising operation of randomized smoothing: find a perturbation 'p' for input 'x', then add random noise 'd' to 'x + p' for 'i' draws, so that the 'i' noisy points (x + p) + d_i fool the smoothed classifier as much as possible. A minimal sketch of this idea is shown below.
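The sketch below is not the exact code from the linked repo, just an illustration of the L2 variant under common assumptions: the gradient of the smoothed loss is approximated by averaging the softmax over a batch of Gaussian noise draws (the "soft" smoothed classifier objective from the SmoothAdv paper), and the function name and hyperparameters (eps, alpha, steps, num_noise, sigma) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_smooth_l2(model, x, y, eps=1.0, alpha=0.25, steps=10, num_noise=8, sigma=0.25):
    # Hypothetical helper: L2 PGD against a smoothed classifier, approximating
    # the smoothed loss by averaging softmax outputs over num_noise noise draws.
    x, y = x.clone().detach(), y.clone().detach()
    delta = torch.zeros_like(x, requires_grad=True)

    for _ in range(steps):
        # Replicate the perturbed input and add fresh Gaussian noise d_i.
        x_rep = (x + delta).unsqueeze(1).repeat(1, num_noise, 1, 1, 1)
        x_rep = x_rep.view(-1, *x.shape[1:])
        noise = torch.randn_like(x_rep) * sigma
        logits = model(x_rep + noise)

        # Soft smoothed classifier: average the softmax over the noise draws,
        # then take the cross-entropy of that average as the attack loss.
        probs = F.softmax(logits, dim=1).view(x.size(0), num_noise, -1).mean(1)
        loss = F.nll_loss(torch.log(probs.clamp_min(1e-12)), y)

        grad = torch.autograd.grad(loss, delta)[0]

        # Normalized L2 gradient step, then project delta back onto the eps-ball.
        grad_norm = grad.view(x.size(0), -1).norm(p=2, dim=1).clamp_min(1e-12)
        delta = delta + alpha * grad / grad_norm.view(-1, 1, 1, 1)
        delta_norm = delta.view(x.size(0), -1).norm(p=2, dim=1).clamp_min(1e-12)
        factor = (eps / delta_norm).clamp(max=1.0).view(-1, 1, 1, 1)
        delta = (delta * factor).detach().requires_grad_(True)

    return (x + delta).clamp(0, 1).detach()
```

The Linf variant would differ only in the step and projection: use the sign of the gradient for the step and clamp delta elementwise to [-eps, eps].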

YijiangPang · Aug 08 '22