
[feature] modify loss

Open Freed-Wu opened this issue 3 years ago • 2 comments

By default,

        loss = nn.CrossEntropyLoss()

Can it be changed to another function, to make this package more flexible and customizable?

Freed-Wu avatar Jul 21 '22 02:07 Freed-Wu
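
(For illustration only, not part of the package: below is a sketch of the kind of alternative objective a user might want to plug in, a Carlini-Wagner style margin loss. The function name and the kappa parameter are hypothetical.)

import torch

def cw_margin_loss(logits, labels, kappa=0.0):
    # One-hot mask marking the true class of each sample.
    one_hot = torch.nn.functional.one_hot(labels, num_classes=logits.size(-1)).bool()
    true_logit = logits[one_hot]                                                # z_y
    best_other = logits.masked_fill(one_hot, float("-inf")).max(dim=-1).values  # max_{j != y} z_j
    # Grows as the best wrong class overtakes the true class; capped at kappa
    # so the gradient vanishes once the attack succeeds by a margin of kappa.
    return torch.clamp(best_other - true_logit, max=kappa).mean()

Maximizing this value (as PGD does with cross-entropy in the untargeted case) pushes the true-class logit below the best competing logit.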

The loss function of each method is a key part of the attack. From that point of view, a customizable loss function for all attacks is not feasible for me right now. However, I agree that a customizable loss would improve the usefulness of the package. How about making a new attack class, based on PGD, that allows changing the loss?

Harry24k avatar Sep 23 '22 12:09 Harry24k

How about making a new attack class, based on PGD, that allows changing the loss?

Great!

Freed-Wu avatar Sep 23 '22 13:09 Freed-Wu

Something like:

import torch.nn as nn

from ..attack import Attack


class PGD(Attack):
    def forward(self, images, labels, loss=None):
        r"""
        Overridden.
        """
        images = images.clone().detach().to(self.device)
        labels = labels.clone().detach().to(self.device)

        if self.targeted:
            target_labels = self.get_target_label(images, labels)

        # Fall back to the current default when no custom loss is given.
        if loss is None:
            loss = nn.CrossEntropyLoss()

        # ... the rest of the original PGD loop is unchanged.

Users can customize the loss, and it will not break backward compatibility.

Freed-Wu avatar Jan 17 '23 08:01 Freed-Wu
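
(A hedged usage sketch, assuming the `loss=` parameter above were merged; `model`, `images`, and `labels` are placeholders for the user's own objects, and calling `forward` directly is just for illustration.)

import torch.nn as nn

# Hypothetical usage of the sketch above; PGD here is the subclass with the
# extra `loss=` argument, and `model`, `images`, `labels` are assumed to exist.
atk = PGD(model, eps=8/255, alpha=2/255, steps=10)
adv_images = atk.forward(images, labels,
                         loss=nn.CrossEntropyLoss(label_smoothing=0.1))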

Perhaps the optimizer can also be customized. And how about providing a hook function that lets users record some data to TensorBoard?

Freed-Wu avatar Jan 17 '23 11:01 Freed-Wu

a hook function that lets users record some data

I think this is going to be pretty useful. I also opened an issue (#130) discussing the output format, so that more information can be collected at the end.

Perhaps the optimizer can also be customized

I am not sure that's how it works. Essentially, the gradient-based attacks like BIM, FGSM, and PGD are themselves the optimizers. From a naïve behavioural perspective, an optimizer (in PyTorch) just takes your gradients (and maybe some previous state) and tells you what step to take.

cestwc avatar Mar 21 '23 00:03 cestwc
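
(Not part of torchattacks, just a minimal sketch of how the step-hook idea could look in a hand-rolled PGD loop; the HookedPGD name and the hook signature are made up for illustration.)

import torch
import torch.nn as nn

class HookedPGD:
    """Minimal PGD loop that calls a user-supplied hook once per step."""
    def __init__(self, model, eps=8/255, alpha=2/255, steps=10, step_hook=None):
        self.model, self.eps, self.alpha, self.steps = model, eps, alpha, steps
        self.step_hook = step_hook  # callable(step, cost, adv_images) or None

    def forward(self, images, labels):
        loss_fn = nn.CrossEntropyLoss()
        adv = images.clone().detach()
        for step in range(self.steps):
            adv.requires_grad_(True)
            cost = loss_fn(self.model(adv), labels)
            grad = torch.autograd.grad(cost, adv)[0]
            # Gradient-ascent step followed by projection onto the eps-ball and [0, 1].
            adv = adv.detach() + self.alpha * grad.sign()
            adv = images + torch.clamp(adv - images, -self.eps, self.eps)
            adv = torch.clamp(adv, 0, 1).detach()
            if self.step_hook is not None:
                self.step_hook(step, cost.item(), adv)
        return adv

# Example wiring (assuming `model`, `images`, `labels` exist):
#   from torch.utils.tensorboard import SummaryWriter
#   writer = SummaryWriter()
#   atk = HookedPGD(model, step_hook=lambda s, c, _: writer.add_scalar("pgd/loss", c, s))
#   adv = atk.forward(images, labels)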

Here is a modified version of PGD, named UPGD, following the discussion above: https://github.com/Harry24k/adversarial-attacks-pytorch/commit/100047f5f3e41b339fa8c20c2b6d311ded1dd909.

Harry24k avatar Mar 25 '23 06:03 Harry24k

As for customizing the optimizer, it is quite difficult to modify all the attacks to support a customizable optimizer. I will leave this as future work.

Harry24k avatar Mar 25 '23 07:03 Harry24k