Question about backprop
Hi Matt,
Is it possible to get the gradient from backpropagation using Core ML or Metal? I am trying to implement an adversarial attack on a Core ML ResNet-50 and don't know how to go about it.
By the way, the tutorials you post are excellent.
Core ML: no, gradients are not exposed in the API.
Metal: yes, but you'll have to re-implement the model using the lowest-level MPS primitives, such as MPSCNNConvolutionGradient.
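To make concrete what a gradient primitive such as MPSCNNConvolutionGradient computes, here is a framework-agnostic NumPy sketch (not MPS code): a 1-D "valid" convolution layer and the gradient of a toy loss with respect to its input. The function names and the loss are illustrative assumptions, not part of any Apple API.

```python
import numpy as np

def conv_forward(x, w):
    # Forward pass: y[i] = sum_k x[i+k] * w[k]  (cross-correlation, "valid" padding),
    # which is what a convolution layer computes.
    return np.correlate(x, w, mode="valid")

def conv_backward_input(dy, w):
    # Backward pass w.r.t. the input: dL/dx[j] = sum_i dy[i] * w[j-i],
    # i.e. a "full" convolution of the upstream gradient with the kernel.
    # This is the role a convolution-gradient primitive plays.
    return np.convolve(dy, w, mode="full")

x = np.array([1.0, 2.0, -1.0, 0.5, 3.0])   # toy input
w = np.array([0.2, -0.5, 0.3])             # toy kernel

y = conv_forward(x, w)
loss = 0.5 * np.sum(y**2)      # toy loss: L = 1/2 * ||y||^2
dy = y                         # dL/dy for that loss
dx = conv_backward_input(dy, w)  # dL/dx, the gradient an attack would use
```

For an adversarial attack, `dx` (the gradient with respect to the input, not the weights) is the quantity you ultimately need.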
I see. So it would be: 1) implement the model using MPS primitives, 2) load the weights and biases into the MPS model.
What happens next? Are there Metal functions for the forward and backward passes? Or would those also need to be implemented?
It's possible a nicer API is available these days (I haven't used MPS in a while), but in the past you had to implement both the forward and backward pass yourself. A very simple model would be MPSLinearLayer -> MPSLossFunction -> MPSLinearLayerGradient, where MPSLinearLayerGradient is the backward pass.
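The chain above (layer -> loss -> gradient layer) can be sketched end to end in NumPy, standing in for the MPS kernels. This is a minimal illustration under assumed shapes and a softmax cross-entropy loss; the final step is an FGSM-style perturbation, one common adversarial attack. Nothing here is Apple API; it only mirrors the forward/backward structure you would build with MPS.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Model": one linear layer followed by softmax cross-entropy.
W = rng.normal(size=(4, 3))   # 4 inputs -> 3 classes (toy sizes)
b = np.zeros(3)
x = rng.normal(size=4)        # the input we want to perturb
target = 2                    # true class label

def forward(x):
    # Forward pass: linear layer, then loss (the MPSLinearLayer ->
    # MPSLossFunction part of the chain).
    logits = x @ W + b
    z = logits - logits.max()            # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    loss = -np.log(p[target])            # cross-entropy on the true class
    return loss, p

def backward_input(p):
    # Backward pass: gradient of the loss w.r.t. the *input* (the
    # gradient-layer part of the chain).
    dlogits = p.copy()
    dlogits[target] -= 1.0               # d loss / d logits for softmax CE
    return W @ dlogits                   # d loss / d x

loss, p = forward(x)
dx = backward_input(p)

# FGSM-style step: nudge the input along the sign of the gradient to
# increase the loss on the true class.
eps = 0.1
x_adv = x + eps * np.sign(dx)
adv_loss, _ = forward(x_adv)
```

With MPS you would express each of these stages as a GPU kernel instead, but the bookkeeping (which tensors flow forward, which gradients flow backward) is the same.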
This sounds like the current API. Does your book (or any of your articles) have an example implementation of this?
I don't have any examples for this, unfortunately.