fast_adversarial
[ICLR 2020] A repository for extremely fast adversarial training using FGSM
Hi! I am having a hard time reproducing the results (on MNIST, for example), and I have found that they differ when I change the PyTorch version. I observe the...
Hi, can anyone help me understand why clean samples are not used during training? Does omitting them reduce performance? Thanks~
Bumps [numpy](https://github.com/numpy/numpy) from 1.17.2 to 1.22.0. Release notes Sourced from numpy's releases. v1.22.0 NumPy 1.22.0 Release Notes NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...
Hello, Thanks for your valuable work. I would like to understand the methodology behind dividing the epsilon and alpha values by the standard deviation. ``` epsilon = (args.epsilon / 255.)...
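A minimal sketch of the idea behind that division, assuming the commonly used CIFAR-10 per-channel std values (the exact constants here are an assumption, not copied from the repo): when inputs are normalized as `(x - mu) / sigma`, an L-inf bound of `eps` in pixel space corresponds to `eps / sigma` in normalized space, so the attack budget must be rescaled per channel.

```python
# Hypothetical illustration: rescaling a pixel-space L-inf bound into the
# normalized input space used by the model. If x_norm = (x - mu) / sigma,
# then a perturbation of eps in pixel space is eps / sigma after normalization.
cifar10_std = (0.2471, 0.2435, 0.2616)  # common CIFAR-10 std values (assumption)

eps_pixels = 8.0 / 255.0                 # bound in [0, 1] pixel space
alpha_pixels = 10.0 / 255.0              # step size in pixel space

epsilon = [eps_pixels / s for s in cifar10_std]   # per-channel bound, normalized space
alpha = [alpha_pixels / s for s in cifar10_std]   # per-channel step, normalized space

print(epsilon)
print(alpha)
```

This is why the bound becomes a per-channel tensor rather than a single scalar: each channel is normalized by a different std.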
In the implementation of FGSM for MNIST, you do not clamp the initial perturbation, meaning you compute gradients based on out-of-bounds data points: # delta = torch.zeros_like(X).uniform_(-args.epsilon,...
Hello Leslie Rice and Eric Wong, Congratulations on your significant work!! I found that the model is always kept in `training` mode during the adversarial training period. However, I think when we...
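A toy sketch of the pattern this issue seems to suggest, using a dummy stand-in for `nn.Module` (the class here is illustrative only): switch to eval mode while crafting the adversarial example, so stochastic layers such as BatchNorm and Dropout behave deterministically, then switch back to train mode for the weight update.

```python
# Hypothetical sketch: toggle between eval mode for the attack phase and
# train mode for the update phase. DummyModel mimics nn.Module's
# train()/eval() flag without requiring PyTorch.
class DummyModel:
    def __init__(self):
        self.training = True

    def train(self):
        self.training = True

    def eval(self):
        self.training = False

model = DummyModel()

# Attack phase: freeze stochastic layers while computing input gradients.
model.eval()
assert not model.training

# Update phase: restore train-mode behavior for the optimizer step.
model.train()
assert model.training
```

Whether this changes robustness in practice depends on the architecture; for BatchNorm-heavy networks the running statistics are otherwise updated on adversarial batches.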
Hi, I'm running the repo with the default configuration for CIFAR-10; however, here is the accuracy I got from the trained model after 15 epochs: ``` Total train time:...
Hi, during training with my custom objective loss, I noticed that the model sometimes produces "nan" values and becomes invalid, which I didn't encounter before with other...
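One common defensive pattern for this kind of problem, sketched with a hypothetical helper (`safe_step` is not part of the repo): check the loss for NaN or Inf before the backward/update step and skip the bad batch so the weights stay valid.

```python
import math

# Hypothetical sketch: guard the update step against NaN/Inf losses so a
# single bad batch does not poison the model weights.
def safe_step(loss_value, step_fn):
    if math.isnan(loss_value) or math.isinf(loss_value):
        return False        # skip the update, keep the model valid
    step_fn()               # e.g. loss.backward(); optimizer.step()
    return True

updates = []
assert safe_step(0.73, lambda: updates.append("ok"))
assert not safe_step(float("nan"), lambda: updates.append("bad"))
print(updates)  # only the finite-loss step ran
```

Logging which batches trip the guard usually narrows the cause quickly (e.g. an exploding custom loss term or a division by zero).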
Hi, I tried to use this method on CIFAR-100 with the same parameter settings as CIFAR-10, but the results are terrible: the test adversarial accuracies are less than 2%....