advertorch
Doubts about the implementation of the carlini_wagner attack
Thank you so much for the wonderful library. I have one doubt about the carlini_wagner attack, though. The original paper uses the change of variables `1/2 * (tanh(w) + 1)` to ensure that the adversarial example stays in the range [0, 1]. Here, however, the code seems to use `tanh` only for rescaling, while the optimization is still performed on `x` and `x + delta` directly. In that case, why is an extra clipping step applied? If `tanh` is only used to keep values within range, could a plain `torch.clamp()` call be used instead? Is there still a reason to use the `tanh()` function?

I am a bit confused about the implementation, and some pointers would be really appreciated.
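For context, here is a minimal NumPy sketch of the change of variables the paper describes (this is my own illustration, not advertorch's actual code): mapping an unconstrained variable `w` through `0.5 * (tanh(w) + 1)` keeps the result strictly inside (0, 1) for any optimizer step, whereas a `clamp` enforces the box but has zero gradient outside it.

```python
import numpy as np

def to_tanh_space(x, eps=1e-6):
    # Invert the paper's map: w = arctanh(2x - 1).
    # Clip slightly away from the boundary so arctanh stays finite.
    return np.arctanh(np.clip(2.0 * x - 1.0, -1.0 + eps, 1.0 - eps))

def from_tanh_space(w):
    # tanh maps R -> (-1, 1), so this always lands in (0, 1):
    # no clipping is needed after an optimizer update on w.
    return 0.5 * (np.tanh(w) + 1.0)

x = np.array([0.0, 0.25, 0.9, 1.0])
w = to_tanh_space(x)
x_back = from_tanh_space(w)          # round trip recovers x (up to eps)

# Even a huge unconstrained step on w stays inside the box:
x_adv = from_tanh_space(w + 10.0)
assert np.all((x_adv > 0.0) & (x_adv < 1.0))

# Contrast: clamping also enforces the box, but the gradient of clamp
# is zero wherever the input is outside [0, 1], so a gradient-based
# optimizer receives no signal once it hits the boundary.
x_clamped = np.clip(x + 10.0, 0.0, 1.0)
```

My understanding is that this smooth-gradient property is the usual argument for `tanh` over `clamp`, which is why the extra clipping in the code surprised me.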