
Any adversarial attack that survives a resize attack

Open BalaMallikarjuna-G opened this issue 5 years ago • 6 comments

Hi,

This is Bala. I have a query regarding adversarial attacks.

Is there any adversarial attack whose added noise survives a resize attack? (adversarial image -> convert to a higher / lower resolution -> resize back to the original adversarial image size)

Thanks, Bala

BalaMallikarjuna-G avatar Dec 16 '19 11:12 BalaMallikarjuna-G

Hi Bala, you mean attack or defense? I don't quite follow your question.

gongzhitaao avatar Dec 16 '19 14:12 gongzhitaao

Hi Sir, Sorry for late reply. I was out of work for last few days.

Thanks for your reply. I need an attack that stays robust after a resize attack. What I expect: targeted noise is added to a 28x28 image -> resize to 65x75 -> resize back to 28x28, and the target label should still be predicted. What I observed when I tested this process: targeted noise added to a 28x28 image -> resize to 65x75 -> resize back to 28x28, and the target label is disturbed (no longer predicted). I checked the PGD and BIM attacks using foolbox; the added noise is removed or disturbed. Please let me know your suggestions.
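The round-trip described above (28x28 -> 65x75 -> 28x28) can be reproduced in a few lines to measure how much of the perturbation actually survives the resampling. This is not from the thread; it is a toy numpy/Pillow sketch that uses a random image and random noise in place of a real adversarial example, and assumes bilinear resampling (the filter choice matters in practice):

```python
import numpy as np
from PIL import Image

def resize_round_trip(img, mid_size=(65, 75)):
    """Resize a [0,1] float image up to mid_size (w, h) and back to its original size."""
    pil = Image.fromarray((img * 255).astype(np.uint8))
    up = pil.resize(mid_size, Image.BILINEAR)
    # PIL uses (width, height); img.shape is (height, width)
    back = up.resize((img.shape[1], img.shape[0]), Image.BILINEAR)
    return np.asarray(back).astype(np.float32) / 255.0

rng = np.random.default_rng(0)
clean = rng.random((28, 28)).astype(np.float32)
noise = 0.05 * rng.standard_normal((28, 28)).astype(np.float32)  # stand-in for adversarial noise
adv = np.clip(clean + noise, 0.0, 1.0)

recovered = resize_round_trip(adv)
# How much perturbation (relative to the clean image) remains after the round trip?
residual = np.abs(recovered - clean)
print("mean |perturbation| before round trip:", np.abs(adv - clean).mean())
print("mean |perturbation| after round trip: ", residual.mean())
```

High-frequency, pixel-level noise (like PGD/BIM perturbations) tends to be attenuated by this kind of down/up-sampling, which is consistent with the target label being "disturbed" in the test above; to compare attacks fairly, one would run this round trip on real adversarial examples and check the model's prediction on `recovered`.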

BalaMallikarjuna-G avatar Dec 29 '19 02:12 BalaMallikarjuna-G

So if I understand it correctly, you want an attack that survives the resizing, right?

The term "resizing attack" is a bit confusing; do you mean a resizing defense?

As far as I know, resizing is not an effective defense against adversarial images. It lowers the attack success rate, but it does not solve the problem: many adversarial examples remain adversarial even after resizing. Some of the early papers on adversarial examples (e.g., the FGSM paper) report related results.
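The claim that resizing only lowers, rather than eliminates, an attack's effect can be illustrated with a toy linear model. This sketch is not from the thread: it assumes a single linear logit `w · x`, attacks it with an FGSM-style perturbation `eps * sign(w)`, and compares the logit shift the perturbation causes before and after a bilinear resize round trip. The exact numbers depend on the random seed and resampling filter:

```python
import numpy as np
from PIL import Image

def round_trip(img):
    """28x28 -> 65x75 -> 28x28 bilinear resize, on a [0,1] float image."""
    pil = Image.fromarray((img * 255).astype(np.uint8))
    up = pil.resize((65, 75), Image.BILINEAR)
    return np.asarray(up.resize((28, 28), Image.BILINEAR)).astype(np.float32) / 255.0

rng = np.random.default_rng(1)
w = rng.standard_normal((28, 28)).astype(np.float32)   # toy linear classifier weights
x = rng.random((28, 28)).astype(np.float32)            # toy input image

# FGSM on a linear model: eps * sign(w) maximizes the logit shift per L-inf budget.
eps = 0.1
adv = np.clip(x + eps * np.sign(w), 0.0, 1.0)

# Logit shift caused by the perturbation, before and after resizing both images.
shift_before = float((w * (adv - x)).sum())
shift_after = float((w * (round_trip(adv) - round_trip(x))).sum())
print(f"logit shift before resize: {shift_before:.2f}")
print(f"logit shift after resize:  {shift_after:.2f}")
```

For this toy model the pre-resize shift is large by construction; how much survives the round trip depends on how much of the perturbation's energy sits in frequencies the resampling preserves, which is why resizing degrades but does not reliably remove adversarial perturbations.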

Hope this helps.

gongzhitaao avatar Dec 29 '19 19:12 gongzhitaao

Hi @gongzhitaao , what do you think about AdvFaces [1] or Amora [2]? These adversarial attacks change only a few pixels in the image, so I think these methods are more vulnerable to the resize operation. What do you think about this?

[1] https://arxiv.org/abs/1908.05008
[2] https://arxiv.org/abs/1912.03829

m-pektas avatar Sep 02 '20 13:09 m-pektas

Hey @mhmddpkts, I haven't read those papers yet. Sorry, I'm no longer working on adversarial attacks/defenses (that was a long time ago), so my opinions might be outdated. :smile:

gongzhitaao avatar Sep 02 '20 18:09 gongzhitaao

When I searched for this problem on Google, I found this page 😅 so I asked you. Anyway, thanks for your reply @gongzhitaao

m-pektas avatar Sep 02 '20 22:09 m-pektas