nn_robust_attacks

L2 regularization term is squared. Why here specifically? What is the impact?

ego-thales opened this issue 2 years ago · 0 comments

Hello @carlini,

Reading through your paper and your code, I noticed that for the $L^2$ attack, you use the squared regularization term $\Vert\delta\Vert_2^2$. Yet everywhere else in the paper, you write $\Vert\delta\Vert_p$ (no square), e.g. in section A and for the $L^{\infty}$ attack. Furthermore, Szegedy et al. also used the norm without the square.
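For concreteness, here is a minimal PyTorch sketch (not the repository's actual TensorFlow code) of how the exponent changes the gradient of the objective; `delta`, `f_loss`, and `c` are illustrative placeholders for the perturbation, the misclassification loss $f(x+\delta)$, and the trade-off constant from the paper:

```python
import torch

# Illustrative setup: delta is the perturbation being optimized,
# f_loss stands in for f(x + delta), c is the trade-off constant.
delta = torch.randn(3, 32, 32, requires_grad=True)
f_loss = torch.tensor(0.0)  # placeholder for f(x + delta)
c = 1.0

loss_squared = delta.norm(p=2) ** 2 + c * f_loss  # what the code minimizes
loss_plain = delta.norm(p=2) + c * f_loss         # what the paper writes

# The regularization gradients differ:
#   d/d(delta) ||delta||_2^2 = 2 * delta
#   d/d(delta) ||delta||_2   = delta / ||delta||_2
# so the squared form scales the pull toward delta = 0 by 2 * ||delta||_2,
# changing the effective trade-off with c as the perturbation shrinks.
grad_squared, = torch.autograd.grad(loss_squared, delta, retain_graph=True)
grad_plain, = torch.autograd.grad(loss_plain, delta)
print(torch.allclose(grad_squared, 2 * delta.norm(p=2) * grad_plain))  # True
```

Since both terms are monotone in $\Vert\delta\Vert_2$, the two objectives agree on which of two perturbations is smaller, but they need not find the same minimum once the constant $c$ trades off against the $f$ term.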

Questions:

  1. Is this done purposefully?
  2. Is it discussed anywhere?
  3. Are you sure about the impact (or absence thereof) of the exponent on the results?

Thanks, and congrats on achieving your goal:

> We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.

Élie

ego-thales · Jan 10 '24