Example use of some of the attacks
Hi, sorry to bother you. I would like to request if the contributors can either provide a detailed explanation or the examples to use different attack methods that utilize different classifier types such as CLASSIFIER_LOSS_GRADIENTS_TYPE, CLASSIFIER_TYPE, and CLASSIFIER_NEURALNETWORK_TYPE. It is difficult and challenging to know to find out what is the exact parameter need of the attack method and how one trained classifier (say VGG using Keras blackened) can be used against various attacks.
If is there any way we can convert the classifier types from one class to another, the solution towards that can also be helpful. I think it can increase the functionality of the toolbox and the users can then use the toolbox exntensively.
Hi @akshayag Thank you very much for your proposal. Although we already have examples and tutorials in examples and notebooks, supported by the ART documentation on Read the Docs, I understand your proposal and agree that this is the right time to create a more systematic introduction to all the new tools, methods, conventions, and possibilities introduced in the recent releases. We'll try to provide material in this direction as soon as possible.
In the meantime, if you have specific questions, please feel free to ask them here!
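As a quick pointer in the meantime: attacks annotated with CLASSIFIER_LOSS_GRADIENTS_TYPE only require that the wrapped classifier can return the gradient of the loss with respect to the input, which is why white-box attacks like FGSM work with any framework wrapper (Keras, PyTorch, TensorFlow) that implements that interface. The sketch below is purely illustrative and does not use ART's API; it shows, on a toy hand-coded logistic classifier, why a loss gradient alone is enough to run an FGSM-style attack. All names here (`loss_gradient`, `fgsm`, the weights) are made up for the example.

```python
import math

# Toy binary logistic "classifier": p = sigmoid(w.x + b).
# Fixed, hand-picked weights purely for illustration.
W = [2.0, -1.0]
B = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_gradient(x, y):
    """Gradient of the cross-entropy loss w.r.t. the INPUT x (not the weights).

    This is the only capability a CLASSIFIER_LOSS_GRADIENTS_TYPE-style
    estimator must expose for gradient-based attacks.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)
    # d(loss)/dz = p - y for sigmoid + cross-entropy, and dz/dx_i = w_i.
    return [(p - y) * wi for wi in W]

def fgsm(x, y, eps=0.1):
    """One FGSM step: perturb each feature in the sign of the loss gradient."""
    g = loss_gradient(x, y)
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, g)]

x = [1.0, 0.5]
x_adv = fgsm(x, y=1, eps=0.1)
print(x_adv)  # each feature moved by +/- eps in the loss-increasing direction
```

The same idea scales up: the framework wrapper computes the loss gradient via backpropagation, and the attack consumes only that gradient, so the underlying model library does not matter to the attack code.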
I also have the same request. Could you provide more examples of more attacks with different models?