Evaluating the Adversarial Robustness of Adaptive Test-time Defenses
Francesco Croce*, Sven Gowal*, Thomas Brunner*, Evan Shelhamer*, Matthias Hein, Taylan Cemgil
https://arxiv.org/abs/2202.13711
Case study
We evaluate the following defenses:
- yoon_2021: Adversarial Purification with Score-based Generative Models
- hwang_2021: AID-purifier: A light auxiliary network for boosting adversarial defense
- wu_2021: Attacking Adversarial Attacks as A Defense
- shi_2020: Online Adversarial Purification based on Self-Supervision
- kang_2021: Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending against Adversarial Attacks
- mao_2021: Adversarial Attacks are Reversible with Natural Supervision
- qian_2021: Improving Model Robustness with Latent Distribution Locally and Globally
- alfarra_2021: Combating Adversaries with Anti-Adversaries
- chen_2021: Towards Robust Neural Networks via Close-loop Control
Some folders contain a single Python notebook, while others contain more involved code. In the latter case, the folder also includes a run_eval.sh script with the commands to run the evaluations, or an explanatory README.md file.
The pre-trained models have to be downloaded following the instructions in the corresponding folders and original papers, together with the details provided in the appendix of our paper.
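As an illustration, the intended workflow for a defense that ships a run_eval.sh might look like the sketch below. This is only an assumption of the typical steps: the checkpoints directory name is hypothetical, and whether a given folder (e.g. yoon_2021) provides a script or a notebook varies, so consult each folder's README.md and the paper appendix for the actual paths and commands.

```sh
# Hypothetical workflow sketch -- exact folder contents vary per defense;
# follow the instructions inside each folder and the appendix of the paper.

# 1. Enter the folder of the defense to evaluate (here yoon_2021 as an example).
cd yoon_2021

# 2. Download the pre-trained models linked in the folder / original paper and
#    place them where the folder's instructions expect them (the directory name
#    below is an assumption, not the required layout).
mkdir -p checkpoints
# ... download checkpoints here ...

# 3. Run the provided evaluation commands, if the folder ships a script,
#    or open the single Python notebook instead.
bash run_eval.sh
```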