ryder

Results: 22 comments by ryder

Thank you very much for sharing. For FGSM in DEEPSEC, we ran it several times; it is always 38.3% with the seed manually set to 100. On the other hand,...

The definition of FGSM is as follows: ![image](https://user-images.githubusercontent.com/7763863/55062592-41824280-50b1-11e9-9b74-6a4b93cc128e.png) Do we violate this definition of FGSM anywhere? I am quite sure our implementation does exactly what the attack specifies by using...
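For reference, a minimal PyTorch sketch of the standard FGSM update \(x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y))\); the `model` and `epsilon` here are placeholders, not the DEEPSEC code itself:

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon):
    """One-step FGSM: move x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x J(theta, x, y))
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]
```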

From my perspective, it is unfair to compare implementations that use different loss functions. When I investigated the loss function further and changed it from _**torch.nn.CrossEntropyLoss()**_ to **_torch.nn.NLLLoss()_**, the...
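To illustrate why swapping the loss changes results: in PyTorch, `CrossEntropyLoss` applies a log-softmax internally, while `NLLLoss` expects log-probabilities already, so the two only agree when the model head matches the loss. A quick self-contained check:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)              # raw model outputs
targets = torch.randint(0, 10, (4,))

# CrossEntropyLoss takes raw logits (log-softmax is applied internally)...
ce = nn.CrossEntropyLoss()(logits, targets)

# ...whereas NLLLoss takes log-probabilities, so log-softmax must come first.
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)

assert torch.allclose(ce, nll)  # equal only when each loss gets its expected input
```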

Fixed in https://github.com/kleincup/DEEPSEC/commit/d4e1181e84beef8e6ef5d5d86d87df015e98fb94 by changing how the model is defined for both MNIST and CIFAR10, even though the original form is the one suggested officially by PyTorch (https://github.com/pytorch/examples/blob/master/mnist/main.py). Nothing needs to be changed in our implementation of FGSM....
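I have not inspected the commit itself, but assuming the fix is the usual one for this mismatch, it amounts to choosing the model head to match the loss; a hypothetical sketch:

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Hypothetical model head illustrating the two conventions."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        logits = self.fc(x.flatten(1))
        # The official PyTorch MNIST example returns log-probabilities,
        # to be paired with nn.NLLLoss:
        #   return F.log_softmax(logits, dim=1)
        # Returning raw logits instead pairs with nn.CrossEntropyLoss:
        return logits
```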

> Three of the attacks presented (EAD, CW2, and BLB) are _unbounded_ attacks: rather than finding the “worst-case” (i.e., highest loss) example within some distortion bound, they seek to find...
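For context, the two formulations being contrasted in the quote can be written as follows (standard definitions, not specific to this thread):

```latex
% Bounded ("worst-case") attack: maximize the loss within a distortion budget
\max_{\|\delta\|_p \le \epsilon} J(\theta, x + \delta, y)

% Unbounded (minimum-distortion) attack: find the smallest perturbation
% that changes the model's prediction
\min_{\delta} \|\delta\|_p \quad \text{s.t.} \quad f(x + \delta) \ne y
```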

Thanks. The difference between UMIFGSM and TMIFGSM is exactly the difference between an untargeted attack and a targeted one. More detail can be found in Equations 6-7 and Equations 11-12 of Yinpeng...
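A minimal sketch of where the two variants diverge, assuming the standard MI-FGSM update from Dong et al.; `mu`, `epsilon`, and `steps` are placeholder hyperparameters:

```python
import torch
import torch.nn as nn

def mifgsm_step(model, x, label, g, epsilon, mu, steps, targeted=False):
    """One MI-FGSM iteration: accumulate momentum over the normalized
    gradient, then step along (untargeted) or against (targeted) its sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), label)
    loss.backward()
    g = mu * g + x.grad / x.grad.abs().sum()  # momentum on the L1-normalized gradient
    step = (epsilon / steps) * g.sign()
    # UMIFGSM ascends the loss on the true label;
    # TMIFGSM descends the loss on the chosen target label.
    x_adv = x - step if targeted else x + step
    return x_adv.clamp(0, 1).detach(), g.detach()
```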

> While the idea of adversarial training is straightforward—generate adversarial examples during training and train on those examples until the model learns to classify them correctly—in practice it is difficult...
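For readers unfamiliar with the idea in the quote, a minimal sketch of one adversarial-training step, using FGSM as the example generator; `epsilon` and the loss are placeholder choices, not the procedure any particular defense uses:

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One step of adversarial training: craft adversarial examples
    on the fly (here via FGSM), then train the model on them."""
    # Generate adversarial examples against the current model.
    x_adv = x.clone().detach().requires_grad_(True)
    nn.CrossEntropyLoss()(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    # Train on the adversarial examples.
    optimizer.zero_grad()
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```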

As we stated at the beginning of Section II, "**In this paper, we consider the non-adaptive and white-box attack scenarios, where the adversary has full knowledge of the target DL...

> This framework is designed to "systematically evaluate the existing adversarial attack and defense methods". The research community would be well served by such an analysis. When new defenses are...

> It is a basic observation that when given strictly more power, the adversary should never do worse. However, in Table VII the paper reports that MNIST adversarial examples with...