nn_robust_attacks
L2 regularization term is squared. Why here specifically? What is the impact?
Hello @carlini,
Reading through your paper and your code, I noticed that for the $L^2$ attack, you use a regularization term $\Vert\delta\Vert_2^2$. But everywhere else in your paper (e.g., in section A, or for the $L^{\infty}$ attack), you write $\Vert\delta\Vert_p$, with no square. Furthermore, Szegedy et al. also used it without the square.
Questions:
- Is this done purposefully?
- Is it discussed anywhere?
- Are you sure about the impact (or absence thereof) of the exponent on the results?
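For context, here is a minimal numerical sketch (my own illustration, not from the paper or the repo) of why the exponent could matter for gradient-based optimization: the gradient of $\Vert\delta\Vert_2^2$ is $2\delta$, which is smooth and shrinks as $\delta$ shrinks, whereas the gradient of $\Vert\delta\Vert_2$ is $\delta/\Vert\delta\Vert_2$, which always has unit length and is undefined at $\delta = 0$.

```python
import numpy as np

def grad_squared_l2(delta):
    # Gradient of ||delta||_2^2 is 2*delta: smooth, vanishes as delta -> 0.
    return 2.0 * delta

def grad_l2(delta):
    # Gradient of ||delta||_2 is delta/||delta||_2: unit norm, undefined at 0.
    return delta / np.linalg.norm(delta)

delta = np.array([3e-3, 4e-3])    # a small perturbation (hypothetical values)
print(grad_squared_l2(delta))     # [0.006 0.008] -- tiny step near the optimum
print(grad_l2(delta))             # [0.6 0.8]     -- unit-length step regardless of scale
```

So with the squared term the penalty's pull weakens smoothly as the perturbation gets small, while the unsquared norm keeps pushing with constant magnitude; whether that changes the final attack results is exactly what I am asking about.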
Thanks, and congrats on achieving your goal:

> We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
Élie