DEEPSEC

Paper does not report attack success rate for targeted adversarial examples

carlini opened this issue · 2 comments

When measuring how well targeted attacks work, the metric should be the targeted attack success rate. However, Table V measures the model misclassification rate. This is not the right way to measure it.

It's also unclear why PGD and BIM are listed as untargeted attacks and not as targeted attacks, when they work both ways (i.e., CW2 is the same and could just as easily be classified as an untargeted attack).

carlini · Feb 26 '19

> When measuring how well targeted attacks work, the metric should be the targeted attack success rate. However, Table V measures the model misclassification rate. This is not the right way to measure it.
>
> It's also unclear why PGD and BIM are listed as untargeted attacks and not as targeted attacks, when they work both ways (i.e., CW2 is the same and could just as easily be classified as an untargeted attack).

We agree with you that when measuring how well targeted attacks work, the metric should be the targeted attack success rate, and we do measure and analyze the targeted attack success rate of targeted attacks in Table III and Section IV.A of the paper.

However, Table V does not measure the success rate of attacks; it measures the classification accuracies of defense-enhanced models (the targeted success rate of an attack must be less than or equal to 100% minus the accuracy of the defense). Moreover, in non-adaptive scenarios, defenders of defense-enhanced models do not need to know which type an attack belongs to (targeted or untargeted). Their only goal is to classify the adversarial examples correctly, so in Table V we evaluate the classification accuracies of defense-enhanced models against successful adversarial examples.
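The distinction between the two metrics can be sketched as follows. This is a minimal illustration with made-up label arrays (`preds`, `true_labels`, `targets` are hypothetical, not from the paper's data): every targeted success is also a misclassification, so the targeted success rate is bounded above by 100% minus accuracy.

```python
import numpy as np

# Hypothetical labels: model predictions on adversarial examples,
# the true labels, and the attacker's chosen target labels.
preds       = np.array([2, 7, 7, 1, 0, 3])
true_labels = np.array([2, 3, 5, 1, 4, 3])
targets     = np.array([9, 7, 7, 0, 6, 8])  # all differ from the true labels

# Misclassification rate: fraction of adversarial examples the model
# gets wrong, regardless of which wrong class it picks (1 - accuracy).
misclassification_rate = np.mean(preds != true_labels)

# Targeted attack success rate: fraction of examples classified as
# exactly the attacker's chosen target class.
targeted_success_rate = np.mean(preds == targets)

# Since every hit on the target class (target != true label) is also a
# misclassification: targeted_success_rate <= misclassification_rate.
print(misclassification_rate, targeted_success_rate)
```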

ryderling · Mar 15 '19

It's great that you do measure this for the attacks against the undefended model. But I still care about how well targeted attacks work even when considering defended models from the perspective of the adversary.

For example, for LLC you report an average model accuracy of 39.4%, whereas for ILLC the average model accuracy is 50.9%. It may nevertheless be the case that ILLC is better at generating targeted adversarial examples on defended models; the current data simply doesn't show this either way.

Compared to all the other significant issues, this point is very minor. It's just something that I would have liked to see for evaluating attacks.

carlini · Mar 16 '19