
Issue: Interpolation Method Choice Impacts Robustness of Adversarial Examples

Open · molarsu opened this issue 10 months ago · 0 comments

Thank you for your great work! I noticed that the choice of interpolation method can significantly affect the performance of input transformations.

In the original DIM implementation, the interpolation method used is NEAREST_NEIGHBOR. However, in your implementation of DIM, and when DIM is combined with other methods, some parts of the code (such as this line) use bilinear interpolation instead.

Why This Matters

The difference between NEAREST_NEIGHBOR and bilinear interpolation affects how much detail is preserved in transformed images:

- NEAREST_NEIGHBOR is a coarser interpolation method that produces less smooth transformations, introducing more randomness and diversity into the transformed images.
- Bilinear interpolation produces smoother transformations, which may reduce the variation introduced by the input transformation.
- Because NEAREST_NEIGHBOR preserves rougher detail, it increases the diversity of the input transformations, making the adversarial examples more robust.

Our experiments have also confirmed that using NEAREST_NEIGHBOR instead of bilinear improves the effectiveness of adversarial attacks. A short comparison of the two modes is sketched after this list.
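For concreteness, here is a minimal sketch of how the two modes behave on the same input, using torch.nn.functional.interpolate; the input tensor and resize target are hypothetical values chosen only for illustration:

```python
import torch
import torch.nn.functional as F

# Hypothetical 1x3x299x299 input batch (e.g., an ImageNet-sized image).
x = torch.rand(1, 3, 299, 299)

# Example resize target, as DIM-style transforms randomly resize the input.
rnd = 280

# Nearest-neighbour keeps hard pixel boundaries: coarser, "blockier" output,
# so repeated random resizes look more different from one another.
x_nearest = F.interpolate(x, size=(rnd, rnd), mode='nearest')

# Bilinear averages neighbouring pixels: smoother output, less per-transform variation.
x_bilinear = F.interpolate(x, size=(rnd, rnd), mode='bilinear', align_corners=False)
```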

Suggested Fix

I would suggest modifying the implementation to use NEAREST_NEIGHBOR (mode='nearest' in PyTorch) instead of bilinear interpolation, to align with the original DIM approach and enhance the robustness of the adversarial examples. A sketch of what such a change could look like is shown below.
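As an illustration only (the function name, default sizes, and probability below are placeholders, not the repository's actual API), a DIM-style random resize-and-pad transform using nearest-neighbour interpolation could look like this:

```python
import torch
import torch.nn.functional as F

def dim_transform(x, image_size=299, resize_max=330, prob=0.7):
    """Sketch of a DIM-style random resize-and-pad transform with nearest-neighbour
    interpolation. Sizes and the function name are illustrative, not the repo's code."""
    # With probability (1 - prob), return the input unchanged, as in DIM.
    if torch.rand(1).item() > prob:
        return x
    # Randomly pick an intermediate size and resize with nearest-neighbour interpolation.
    rnd = torch.randint(image_size, resize_max, (1,)).item()
    x_resized = F.interpolate(x, size=(rnd, rnd), mode='nearest')
    # Randomly pad back up to resize_max x resize_max.
    pad_total = resize_max - rnd
    pad_left = torch.randint(0, pad_total + 1, (1,)).item()
    pad_top = torch.randint(0, pad_total + 1, (1,)).item()
    return F.pad(
        x_resized,
        [pad_left, pad_total - pad_left, pad_top, pad_total - pad_top],
        value=0,
    )
```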

Thanks again for your great work! Looking forward to your thoughts on this issue.

molarsu · Mar 09 '25 14:03