
Enable PGD attack on PyTorch Faster-RCNN using np object arrays as input

lcadalzo opened this issue 4 years ago · 1 comment

**Is your feature request related to a problem? Please describe.** I'm looking to run PGD on ART's PyTorch Faster-RCNN using the xView dataset. This dataset contains images of varying shapes, so in order to use batch_size > 1 the inputs are stored as 1D numpy object arrays. For example, with batch_size=16 the input x has shape (16,) and dtype np.object, and each x[i] is a float (or int) array of shape (image_i_height, image_i_width, 3).
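For illustration, a minimal sketch of how such a ragged batch can be built (the image sizes below are made up; only the object-array layout matters):

```python
import numpy as np

# Two images of different sizes (made-up shapes, standing in for xView samples).
images = [
    np.random.rand(300, 400, 3).astype(np.float32),
    np.random.rand(512, 640, 3).astype(np.float32),
]

# Ragged batch as a 1D object array: x.shape == (2,), x.dtype == object,
# and each x[i] keeps its own (height_i, width_i, 3) shape.
x = np.empty(len(images), dtype=object)
for i, img in enumerate(images):
    x[i] = img
```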

**Describe the solution you'd like** Based on the testing I've done locally, it appears there are two places where modifications would need to be made:

  1. The loss_gradient() method of the PyTorch Faster-RCNN estimator. The attack initially crashes here, because np.stack() requires that all elements of grad_list have the same shape, which they do not in my case. A solution could be to check whether x.dtype == np.object and, if so, make grads a 1D object array and loop through the i elements of the batch, assigning grads[i] to the gradient of image i (see the first sketch after this list).

  2. The next place where the attack crashes is the _apply_perturbation() method of fast_gradient.py, here. The same kind of logic as in (1) could be used to keep np.clip() from breaking on object arrays (see the second sketch after this list).
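
For (1), a rough sketch of the check described above. The function name stack_gradients and its arguments are made up for illustration and are not ART's API; the point is only the branch on x.dtype:

```python
import numpy as np

def stack_gradients(grad_list, x):
    """Collect per-image gradients, handling ragged object-array batches.

    grad_list: list of per-image gradient arrays (possibly of different shapes).
    x: the original input batch, used only to check its dtype.
    """
    if x.dtype == object:
        # Images have different shapes: keep gradients in a 1D object array
        # instead of stacking them into a single dense array.
        grads = np.empty(len(grad_list), dtype=object)
        for i, g in enumerate(grad_list):
            grads[i] = g
        return grads
    # Homogeneous batch: the usual dense stack works.
    return np.stack(grad_list)
```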
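For (2), a similar per-image loop could keep np.clip() working. Again, apply_perturbation_ragged is a hypothetical name and signature, not ART's actual _apply_perturbation():

```python
import numpy as np

def apply_perturbation_ragged(x, perturbation, eps, clip_min, clip_max):
    """Apply a single FGSM/PGD step with clipping, per image for object arrays."""
    if x.dtype == object:
        out = np.empty(x.shape[0], dtype=object)
        for i in range(x.shape[0]):
            x_adv = x[i] + eps * perturbation[i]
            # np.clip operates on one image at a time, so varying shapes are fine.
            out[i] = np.clip(x_adv, clip_min, clip_max)
        return out
    # Homogeneous batch: clip the whole dense array at once.
    return np.clip(x + eps * perturbation, clip_min, clip_max)
```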

**Additional context** It appears this issue might have surfaced recently for another scenario (perhaps ASR?), because I see this kind of logic already implemented here and [here](https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/dev_1.4.2/art/attacks/evasion/fast_gradient.py#L374).

lcadalzo · Oct 29 '20

Hi @lcadalzo, thank you very much for raising this feature request.

Yes, it's correct that PyTorchFasterRCNN expects images of the same size in a single array of shape NHWC.

I think this feature would be very useful for datasets with images of various sizes.

We should also check TensorFlowFasterRCNN to provide the same support.

beat-buesser · Oct 29 '20