DiffAttack
[question] change image size
I want the generated attacks to have an image size of 256x256.
For that, I thought to change the param res to 256
python main.py --model_name resnet18 --save_dir $save_dir --images_root $image_root --label_path $label_path --res 256 #224
An error shows up then
File "main.py", line 164, in <module>
adv_image, clean_acc, adv_acc = run_diffusion_attack(tmp_image, label[ind:ind + 1],
File "main.py", line 70, in run_diffusion_attack
adv_image, clean_acc, adv_acc = diff_latent_attack.diffattack(diffusion_model, label, controller,
File "/home/user/.conda/envs/DiffPurification/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/user/DiffAttack/diff_latent_attack.py", line 374, in diffattack
before_attention_map = aggregate_attention(prompt, controller, 7, ("up", "down"), True, 0, is_cpu=False)
File "/home/user/DiffAttack/utils.py", line 17, in aggregate_attention
out = torch.cat(out, dim=0)
RuntimeError: torch.cat(): expected a non-empty list of Tensors
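A plausible reading of this error (an assumption, not confirmed from the repo's code): `aggregate_attention` keeps only attention maps whose spatial side matches a hard-coded resolution — the `7` passed in at `utils.py`, i.e. 224 / 32. At `--res 256` the corresponding side would be 256 / 32 = 8, so no map matches, the list stays empty, and `torch.cat` raises. A minimal sketch of that filtering logic, with hypothetical names:

```python
# Hypothetical sketch of resolution-based filtering that can produce an
# empty list; `collect_maps` and the (side, map) pairs are illustrative,
# not the repo's actual data structures.
def collect_maps(attention_maps, res):
    """Keep only attention maps whose spatial side length equals `res`."""
    return [m for side, m in attention_maps if side == res]

# At 224x224 input, the most-downsampled maps have side 7 (224 / 32).
maps_224 = [(7, "map_a"), (14, "map_b")]
# At 256x256 input, the smallest side becomes 8 (256 / 32).
maps_256 = [(8, "map_a"), (16, "map_b")]

print(len(collect_maps(maps_224, 7)))  # 1 -> torch.cat succeeds
print(len(collect_maps(maps_256, 7)))  # 0 -> torch.cat([]) raises RuntimeError
```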
Besides that, I assume some parameters need to be tuned so that the attack remains strong across different settings. Which parameters are the important ones?
Hi @jS5t3r ,
Thank you for pointing this out. I've just updated the code, and I'm optimistic that this update resolves the resolution parameter setting issue.
Regarding parameter optimization, I suggest focusing on the three key loss weights: --attack_loss_weight, --cross_attn_loss_weight, and --self_attn_loss_weight. By varying these values, you can fine-tune the attack for either more imperceptibility or increased transferability. Additionally, consider adjusting parameters such as --iterations, --diffusion_steps, and --start_step for further customization.
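To explore those trade-offs systematically, one option is to generate a small grid of runs over the three loss weights. The helper below is a hypothetical sketch (the flag names come from this thread; the weight values are illustrative, not tuned defaults):

```python
# Hypothetical helper that builds a sweep of DiffAttack invocations over the
# three loss weights discussed above. Values are illustrative placeholders.
from itertools import product

def build_commands(attack_ws, cross_ws, self_ws):
    """Return one command string per combination of loss weights."""
    cmds = []
    for a, c, s in product(attack_ws, cross_ws, self_ws):
        cmds.append(
            "python main.py --res 256 "
            f"--attack_loss_weight {a} "
            f"--cross_attn_loss_weight {c} "
            f"--self_attn_loss_weight {s}"
        )
    return cmds

cmds = build_commands([5, 10], [10000], [100, 1000])
print(len(cmds))  # 2 * 1 * 2 = 4 sweep configurations
```

Each command string can then be launched (e.g. via a job scheduler) and the resulting attacks compared for imperceptibility versus transferability.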
Thanks! Where did you change the code to solve the resolution issue?
Hi @youyuanyi ,
You can refer to the commit here for the details of the changes.
Hope this can help.
Thank you very much!