
A Toolbox for Adversarial Robustness Research

28 advertorch issues

This issue exists because [this line](https://github.com/BorealisAI/advertorch/blob/master/advertorch/attacks/fast_adaptive_boundary.py#L22-23), `from advertorch.attacks.utils import zero_gradients`, which refers to the `zero_gradients` function in PyTorch, is obsolete: `zero_gradients` was [removed](https://github.com/pytorch/pytorch/blob/master/torch/autograd/gradcheck.py) from PyTorch in version 1.9. It...
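For context, the removed helper simply cleared the `.grad` attribute of its argument, so a local drop-in replacement can be defined until the import is fixed upstream; a minimal sketch (not an official advertorch fix):

```python
from collections.abc import Iterable

import torch


def zero_gradients(x):
    # Recursively reset .grad to zero for a tensor or an iterable of tensors,
    # mirroring the helper that used to live in torch.autograd.gradcheck.
    if isinstance(x, torch.Tensor):
        if x.grad is not None:
            x.grad.detach_()
            x.grad.zero_()
    elif isinstance(x, Iterable):
        for elem in x:
            zero_gradients(elem)
```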

Hello, I'm posting an issue here that I've encountered with the `MomentumIterativeAttack` object when using the `ord=2` argument. Sometimes, the computation results in perturbed outputs (e.g. a batch of 64...
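For context, a minimal sketch of how the L2 variant of this attack is typically instantiated; `model`, `images`, `labels`, and the hyperparameter values are placeholders, not the reporter's actual setup:

```python
import torch.nn as nn
from advertorch.attacks import MomentumIterativeAttack

# model is assumed to be a classifier returning logits of shape (N, num_classes)
adversary = MomentumIterativeAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=1.0,           # L2 radius of the perturbation ball
    nb_iter=40,
    eps_iter=0.1,
    decay_factor=1.0,
    ord=2,             # use the L2 norm instead of the default L-infinity
    clip_min=0.0,
    clip_max=1.0,
)
adv_images = adversary.perturb(images, labels)
```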

Hi, may I know whether this tool supports tabular data? If so, could you give an example?

Thanks for this awesome toolbox. When I try to attack MNIST using the CarliniWagnerL2Attack, the test results indicate that the attack was not successful. Here is the code: ``` testset...
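For reference, a minimal sketch of how this attack is usually constructed for MNIST; the model name and hyperparameter values below are assumptions for illustration, not the reporter's actual configuration:

```python
from advertorch.attacks import CarliniWagnerL2Attack

# model is assumed to be an MNIST classifier returning logits,
# with inputs normalized to the [0, 1] range
adversary = CarliniWagnerL2Attack(
    model,
    num_classes=10,
    confidence=0,
    max_iterations=1000,
    binary_search_steps=9,
    initial_const=1e-3,
    clip_min=0.0,
    clip_max=1.0,
)
adv_images = adversary.perturb(images, labels)  # untargeted by default
```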

This implementation is based on Carlini’s original [TensorFlow implementation](https://github.com/carlini/nn_robust_attacks/blob/master/li_attack.py). The main differences between the original and this one are: * Carlini’s implementation works on only one image at the...

Hello! I was wondering if the framework has an implementation of the black-box surrogate model attack :) I found it in the old version but not in the recent....

Add a simple spatial transform attack, as proposed at [ICML2019](http://proceedings.mlr.press/v97/engstrom19a.html). I benchmarked it on MNIST and CIFAR-10. I didn't include the CIFAR-10 model trained with ResNet-18 in the repository because it...
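For readers unfamiliar with the paper, the attack searches over small rotations and translations rather than pixel perturbations. Below is a minimal grid-search sketch of that idea under stated assumptions (it is not this PR's implementation; `model`, the grid resolution, and the angle/shift ranges are illustrative, and translations are in normalized image coordinates):

```python
import math

import torch
import torch.nn.functional as F


def grid_search_spatial_attack(model, x, y, max_rot=30.0, max_trans=0.1, steps=5):
    # Try every rotation/translation combination on the grid and keep,
    # for each image, the transform that maximizes the cross-entropy loss.
    device = x.device
    angles = [math.radians(a) for a in torch.linspace(-max_rot, max_rot, steps).tolist()]
    shifts = torch.linspace(-max_trans, max_trans, steps).tolist()
    worst_x = x.clone()
    worst_loss = torch.full((x.size(0),), -float("inf"), device=device)
    with torch.no_grad():
        for a in angles:
            for tx in shifts:
                for ty in shifts:
                    # Affine matrix for a rotation by `a` plus a translation (tx, ty)
                    theta = torch.tensor(
                        [[math.cos(a), -math.sin(a), tx],
                         [math.sin(a),  math.cos(a), ty]],
                        device=device, dtype=x.dtype,
                    ).expand(x.size(0), -1, -1)
                    grid = F.affine_grid(theta, x.shape, align_corners=False)
                    x_t = F.grid_sample(x, grid, align_corners=False)
                    loss = F.cross_entropy(model(x_t), y, reduction="none")
                    better = loss > worst_loss
                    worst_x[better] = x_t[better]
                    worst_loss[better] = loss[better]
    return worst_x
```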

I'm trying to perform a PGD attack on a YOLOv3 model pretrained on the PASCAL VOC dataset. As soon as I pass the image and label to the perturb function, I get an error: AttributeError:...
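A likely source of this kind of error is that advertorch attacks expect a classification-style interface: `predict(x)` should return a single logits tensor that `loss_fn(outputs, labels)` can consume, which a detector such as YOLOv3 does not provide out of the box. For comparison, a minimal sketch of the expected usage with a plain classifier (`model`, `images`, `labels` are placeholders); a detection model would need a wrapper exposing this kind of interface before `perturb` can be called:

```python
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# model is assumed to be a classifier: model(x) -> logits of shape (N, num_classes)
adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=8 / 255,
    nb_iter=40,
    eps_iter=2 / 255,
    rand_init=True,
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)
adv_images = adversary.perturb(images, labels)  # labels: (N,) integer class ids
```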

Hi, first I want to say thanks for your effort to make research much easier! Recently, some papers, among which I cite two, create adversarial images on multiple models to increase...
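Since advertorch attacks only need a `predict` callable, one way to approximate such ensemble (transfer) attacks today is to wrap several source models in a module that averages their logits and pass that module as `predict`. A minimal sketch; the logit-averaging scheme and the model names are illustrative assumptions, not a built-in feature:

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack


class LogitEnsemble(nn.Module):
    # Average the logits of several source models so the attack's gradient
    # flows through all of them at once.
    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)

    def forward(self, x):
        return torch.stack([m(x) for m in self.models], dim=0).mean(dim=0)


ensemble = LogitEnsemble([model_a, model_b, model_c])  # placeholder models
adversary = LinfPGDAttack(ensemble, eps=8 / 255, nb_iter=40, eps_iter=2 / 255)
adv_images = adversary.perturb(images, labels)
```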

Can I perform an adversarial attack on multiple GPUs? How do I configure this in a multi-GPU setting?
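One approach that usually works, since the attacks only call the model through `predict`, is to wrap the model in `torch.nn.DataParallel` before constructing the attack; a minimal sketch (model, data, and attack parameters are placeholders):

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

device = torch.device("cuda")
model = model.to(device)
# The forward and backward passes performed inside the attack are then
# split across the visible GPUs by DataParallel.
parallel_model = nn.DataParallel(model)

adversary = LinfPGDAttack(parallel_model, eps=8 / 255, nb_iter=40, eps_iter=2 / 255)
adv_images = adversary.perturb(images.to(device), labels.to(device))
```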