MAML-Pytorch

Elegant PyTorch implementation of the paper Model-Agnostic Meta-Learning (MAML)

45 MAML-Pytorch issues, sorted by most recently updated

Hello, first of all I would like to thank you for your work. I have a question concerning the accuracy format: I do not understand why there is more than one...
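If it helps, the multiple values most likely correspond to query-set accuracy after each inner-loop update step (index 0 = before any adaptation). A minimal sketch of that logging pattern with stand-in numbers, not the repo's actual code:

```python
import numpy as np

update_steps, num_tasks = 5, 4
corrects = np.zeros(update_steps + 1)    # one slot per step, including step 0

for _ in range(num_tasks):
    for k in range(update_steps + 1):
        corrects[k] += np.random.rand()  # stand-in for per-step query accuracy

accs = corrects / num_tasks              # printed as an array of 6 numbers:
print(accs)                              # accuracy after 0, 1, ..., 5 updates
```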

Hi, has anyone tried using only Conv and Linear layers, without BN? I commented out BN in the config, and the result gives me a random guess... For 5-way, the...

y_spt and y_qry are not the true labels of the corresponding images; are they just indices?
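That is usually intentional: episodic samplers relabel the sampled classes with task-local indices 0..N-1, since the classifier head is N-way per task. A tiny sketch of the remapping (numbers illustrative):

```python
import random

# e.g. five global class IDs sampled for one 5-way Omniglot task
sampled = random.sample(range(1623), 5)  # Omniglot has 1623 characters
# y_spt/y_qry then hold the task-local indices 0..4, not the global IDs
remap = {cls: idx for idx, cls in enumerate(sampled)}
```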

I see that your code just uses `self.net(x_spt[i], fast_weights, bn_training=True)`; however, the `torch.autograd.grad()` method takes the following parameter:

> create_graph (bool, optional) – If True, graph of the derivative will...
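For context, second-order MAML needs `create_graph=True` on the inner-loop gradient so the outer loss can backpropagate through the adaptation step; with `create_graph=False` you get the first-order approximation. A minimal sketch of one inner/outer step (names illustrative, not the repo's code):

```python
import torch

w = torch.randn(3, requires_grad=True)   # a meta-parameter
x, y = torch.randn(5, 3), torch.randn(5)

inner_loss = ((x @ w - y) ** 2).mean()
# create_graph=True keeps the derivative graph so grad-of-grad works later
grads = torch.autograd.grad(inner_loss, w, create_graph=True)
fast_w = w - 0.01 * grads[0]             # fast weight after one inner step

outer_loss = ((x @ fast_w - y) ** 2).mean()
outer_loss.backward()                    # second-order gradient flows into w.grad
```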

```
Traceback (most recent call last):
  File "C:/Users/Dingd/Documents/GitHub/MAML-Pytorch-master/omniglot_train.py", line 95, in <module>
    main(args)
  File "C:/Users/Dingd/Documents/GitHub/MAML-Pytorch-master/omniglot_train.py", line 55, in main
    accs = maml(x_spt, y_spt, x_qry, y_qry)
  File "C:\Users\Dingd\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result...
```

In the code, `requires_grad` is set to False for `running_mean`/`running_var`, yet in the forward code of Learner the `training` argument of `F.batch_norm` is set to True. Why?
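For context: when `F.batch_norm` is called with `training=True`, normalization uses the current batch's own statistics; `running_mean`/`running_var` are only updated in place as buffers and never enter the computation graph, so they need no gradients. A minimal sketch of the call (shapes illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 32, 5, 5)              # a batch of feature maps
weight = torch.ones(32, requires_grad=True)
bias = torch.zeros(32, requires_grad=True)
running_mean = torch.zeros(32)            # buffers: no grad needed
running_var = torch.ones(32)

# training=True -> normalize with the batch mean/var (transductive BN);
# running_mean/running_var are merely updated as statistics on the side.
out = F.batch_norm(x, running_mean, running_var,
                   weight=weight, bias=bias, training=True, momentum=0.1)
```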

I just found that this code uses conv-relu-bn; however, it should be conv-bn-relu. Could you please fix it?
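For reference, the two orderings side by side in plain PyTorch (a sketch for illustration; per the earlier issue, the repo builds its network from a config rather than `nn.Sequential`):

```python
import torch.nn as nn

# Ordering reported in the issue:
conv_relu_bn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.BatchNorm2d(32),
)

# Ordering the issue suggests, matching the conv blocks in the MAML paper:
conv_bn_relu = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
)
```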

In the fine-tuning phase, why do you set `bn_training=True` when testing the net on x_qry? See lines 178, 188, and 204 in meta.py.

1. Regarding the validation accuracy on MiniImageNet: in the original MAML code, evaluation is performed on the validation set instead of the test set, but in this code, evaluation is performed...
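For reference, the standard MiniImageNet protocol keeps three disjoint class pools and uses the validation pool for model selection only. A tiny sketch of the usual 64/16/20 split (class names illustrative):

```python
# Illustrative: the standard MiniImageNet split is 64/16/20 disjoint classes.
all_classes = [f"class_{i:03d}" for i in range(100)]
train_classes = all_classes[:64]    # meta-training tasks
val_classes = all_classes[64:80]    # model selection / early stopping
test_classes = all_classes[80:]     # final reported accuracy only
```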

I used Omniglot to train this model, but I found that the train_data and test_data used in fine-tuning contain the same classes. For example, with 5-way, the train_data classes are [0, 1, 2, 3, 4] and...
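For what it's worth, this is typically by design in episodic meta-learning: within one task, the support (train) and query (test) sets draw different images of the same N classes, while class-level disjointness applies between meta-train and meta-test, not inside a task. A minimal sketch of such a sampler, with illustrative names:

```python
import random

def sample_task(images_by_class, n_way=5, k_shot=1, k_query=15):
    """One N-way task: support/query share the same classes but not images."""
    classes = random.sample(sorted(images_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):   # task-local labels 0..N-1
        imgs = random.sample(images_by_class[cls], k_shot + k_query)
        support += [(img, label) for img in imgs[:k_shot]]
        query += [(img, label) for img in imgs[k_shot:]]
    return support, query
```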