MAML-Pytorch
About the Model vars(parameters) updating in Meta-training state
I see that you use a variable `vars` to store all the variables for the model. I think that's cool and clear; however, my question is: why not use PyTorch's predefined functions `state_dict()` and `load_state_dict()` to update the vars?
Thanks!
My understanding is that `load_state_dict` overwrites `self.net.parameters()` in place. The meta update must be taken with respect to the base parameters `self.net.parameters()`; if their values are replaced, the gradient would instead be computed with respect to `fast_weights` (meta.py line 137), and the connection back to the base parameters would be lost. Is that right?
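To illustrate the point being discussed, here is a minimal sketch (not the repo's actual code, and with made-up toy values) of why the fast weights must stay in the autograd graph as separate tensors rather than being copied into the module via `load_state_dict()`, which copies values without tracking gradients:

```python
import torch

# Base parameter, playing the role of self.net.parameters().
w = torch.tensor([1.0], requires_grad=True)
x, y = torch.tensor([2.0]), torch.tensor([1.0])

# Inner loop: one SGD step that builds fast_w *through* w,
# keeping the graph alive with create_graph=True.
inner_loss = ((w * x - y) ** 2).mean()
grad = torch.autograd.grad(inner_loss, w, create_graph=True)[0]
fast_w = w - 0.01 * grad  # fast_weights: a graph node, not a copied buffer

# Outer (meta) loss is computed with fast_w, but backward()
# propagates all the way to the base parameter w.
outer_loss = ((fast_w * x - y) ** 2).mean()
outer_loss.backward()
print(w.grad is not None)  # True: the meta-gradient reaches w
```

Had `fast_w` been written into the module with `load_state_dict()` (which performs the copy without autograd tracking), the path from the outer loss back to `w` would be severed, and `w.grad` would stay `None`. Passing `vars`/`fast_weights` explicitly through a functional forward keeps that path intact.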