
Some ideas about parameter updating.

Open zhaoyu-li opened this issue 5 years ago • 2 comments

Thanks for your good implementation of MAML. However, I think using state_dict() and load_state_dict() might be much easier than manually rewriting all the weights (in the learner.py forward). Could I first deepcopy the net parameters (state_dict()), update the fast weights with an optimizer instead of list(map(lambda p: p[1] - self.update_lr * p[0], zip(grad, self.net.parameters()))), and then load the original parameters back to update the meta-learner? Thanks.

zhaoyu-li avatar Jan 30 '20 02:01 zhaoyu-li
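
For context, the manual update is what keeps the inner step differentiable. A minimal sketch of that pattern, assuming a net whose forward accepts an explicit weight list as in learner.py (x_spt, y_spt, and update_lr here are placeholders, not names defined in this snippet):

```python
import torch
import torch.nn.functional as F

# Support-set loss through the current meta-parameters.
loss = F.cross_entropy(net(x_spt, vars=None), y_spt)

# create_graph=True keeps the update itself on the autograd graph,
# so the outer (meta) gradient can flow back through it.
grad = torch.autograd.grad(loss, net.parameters(), create_graph=True)
fast_weights = [p - update_lr * g for g, p in zip(grad, net.parameters())]

# By contrast, optimizer.step() mutates parameters in place and
# load_state_dict() copies raw tensors with no history, so a
# deepcopy/restore round trip would detach the inner update from
# the graph and reduce MAML to its first-order approximation.
```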

I also think it's too complicated to redefine the initialization parameters for each layer. Is there any way to put an arbitrary network (such as a ResNet) into the MAML framework without redefining each layer?

im-wll avatar Mar 31 '20 12:03 im-wll
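
One possible route (a sketch, not part of this repo): recent PyTorch exposes torch.func.functional_call (>= 2.0), which runs an unmodified module with an externally supplied parameter dict, so a stock torchvision ResNet can be adapted without redefining any layer. Shapes and the 0.01 inner learning rate below are illustrative:

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call
from torchvision.models import resnet18

# Hypothetical sketch: adapt a stock ResNet with "fast weights" via
# functional_call; no per-layer redefinition needed.
net = resnet18(num_classes=5)
params = dict(net.named_parameters())

# Placeholder 5-way support set.
x_spt = torch.randn(5, 3, 64, 64)
y_spt = torch.randint(0, 5, (5,))

loss = F.cross_entropy(functional_call(net, params, (x_spt,)), y_spt)
grad = torch.autograd.grad(loss, list(params.values()), create_graph=True)

# Differentiable inner step: the fast weights stay on the graph.
fast = {n: p - 0.01 * g for (n, p), g in zip(params.items(), grad)}

# Query-set logits through the adapted weights; a loss on these still
# backpropagates to the original meta-parameters in `params`.
logits_q = functional_call(net, fast, (x_spt,))
```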

I wonder if anyone has successfully implemented this; I haven't. It appears that any load operation, or any attempt to backprop through a separate copy of the network, discards the computational graph.

I have been redefining every layer for deeper networks, so it would really help if this approach works.

shiliang26 avatar Jan 08 '21 07:01 shiliang26
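
A tiny self-contained check of the failure mode described above (all values illustrative): the meta-gradient only exists if the inner step stays on the graph.

```python
import torch

w = torch.tensor(2.0, requires_grad=True)   # meta-parameter
x = torch.tensor(3.0)

inner_loss = (w * x) ** 2
g, = torch.autograd.grad(inner_loss, w, create_graph=True)
w_fast = w - 0.1 * g                         # differentiable inner update

outer_loss = (w_fast * x) ** 2
print(torch.autograd.grad(outer_loss, w))    # well-defined meta-gradient

# Replacing the update with w_fast = (w - 0.1 * g).detach() -- which is
# effectively what an optimizer.step()/load_state_dict() round trip
# produces -- leaves outer_loss with no path back to w, so the call
# above would fail, matching the observation in this thread.
```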