ryder

22 comments by ryder

> Security is all about _worst_-case guarantees. Despite this fact, the paper makes many of its inferences by looking at the _average_-case robustness.
>
> This is fundamentally flawed. ...
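The distinction matters numerically. Below is a minimal sketch (my own illustration; the survival matrix is hypothetical, not data from the paper) of how averaging robustness across attacks can overstate the worst-case guarantee an adversary actually faces:

```python
import numpy as np

# Hypothetical per-example results: rows are test examples, columns are
# attacks; True means the example stayed correctly classified under that
# attack. These values are illustrative only.
survived = np.array([
    [True,  False, True ],   # example 0
    [False, True,  True ],   # example 1
    [True,  True,  False],   # example 2
    [True,  True,  True ],   # example 3
])

# Average-case view: each attack individually leaves 3/4 of the
# examples intact, so the averaged robustness looks high.
per_attack_robustness = survived.mean(axis=0)
print("average-case robustness:", per_attack_robustness.mean())  # 0.75

# Worst-case view: an example counts as robust only if it survives
# *every* attack the adversary might choose. Only example 3 does.
worst_case_robustness = survived.all(axis=1).mean()
print("worst-case robustness:", worst_case_robustness)           # 0.25
```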

> On at least two counts the paper chooses l_infinity distortion bounds that are not well motivated.
>
> * Throughout the paper, the report studies a CIFAR-10 distortion of ...

The argument we made earlier in this issue is a little too strong; I will revise it to: "**It is noticed that the epsilon of gradient-based attacks ranges from 0...

> When measuring how well targeted attacks work, the metric should be targeted attack success rate. However, Table V measures model misclassification rate. This is not the right way to...
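For reference, the two metrics can diverge sharply. Here is a minimal sketch of how each is computed (the prediction, label, and target arrays are hypothetical):

```python
import numpy as np

# Hypothetical model predictions on adversarial examples, the true
# labels, and the attacker-chosen target labels.
pred   = np.array([2, 7, 9, 1, 3])
true   = np.array([0, 7, 4, 1, 5])
target = np.array([2, 3, 7, 6, 3])

# Misclassification rate: any wrong prediction counts, even when the
# model lands on a class the attacker never asked for (example 2).
misclassification_rate = (pred != true).mean()   # 0.6

# Targeted attack success rate: the attack succeeds only when the
# model predicts exactly the attacker-chosen target class.
targeted_success_rate = (pred == target).mean()  # 0.4

print(misclassification_rate, targeted_success_rate)
```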

> Using the data provided, it is not possible to compare the efficacy of different attacks across models. Imagine we would like to decide whether LLC or ILLC was the...

> Table XIII states that on CIFAR-10 the R+FGSM attack was executed with eps=0.05 and alpha=0.05 whereas the README in the Attack module of the open source code gives eps=0.1...

> If you read the original paper that proposes R+FGSM, it defines alpha as the initial step size that's taken randomly, and then (epsilon-alpha) as the gradient step size. So...

Thanks for your suggestion; I will rename the parameter 'alpha' to 'alpha_ratio'.

The parameter 'alpha' has now been renamed to 'alpha_ratio' in https://github.com/kleincup/DEEPSEC/commit/2c67afac0ae966767b6712a51db85f04f4f5c565.
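To make the semantics concrete, here is a minimal PyTorch sketch of R+FGSM as the original paper defines it: a random step of size alpha, then a gradient step of size (eps - alpha). The function name and the alpha_ratio parameterization (alpha = alpha_ratio * eps) follow the rename above; this is an illustration, not the DEEPSEC implementation.

```python
import torch
import torch.nn.functional as F

def r_fgsm(model, x, y, eps=0.1, alpha_ratio=0.5):
    """Sketch of R+FGSM: random step of size alpha, then a gradient
    step of size (eps - alpha), where alpha = alpha_ratio * eps and
    alpha_ratio in [0, 1) keeps the gradient step positive."""
    alpha = alpha_ratio * eps
    # Random initial step: alpha times the sign of Gaussian noise.
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0, 1)
    x_rand = x_rand.detach().requires_grad_(True)
    # Gradient step of size (eps - alpha) from the perturbed point.
    loss = F.cross_entropy(model(x_rand), y)
    grad, = torch.autograd.grad(loss, x_rand)
    x_adv = x_rand + (eps - alpha) * grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Note that under this definition, setting alpha equal to eps (as the eps=0.05, alpha=0.05 reading of Table XIII would imply) leaves a gradient step of size zero, which is why the parameterization matters.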

> Despite the simplicity of the Fast Gradient Sign Method, it is surprisingly effective at generating adversarial examples on unsecured models. However, Table XIV reports the misclassification rate of FGSM...
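For comparison, plain FGSM is the single signed-gradient step that R+FGSM prepends a random step to. A minimal sketch under the same assumptions (my own function name and defaults, not the DEEPSEC code):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Sketch of FGSM (Goodfellow et al.): one gradient-sign step of
    size eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```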