supervised-reptile
Code for the paper "On First-Order Meta-Learning Algorithms"
I used the command below to run the experiment (environment: Python 3.6 / TensorFlow 1.10): ``` # 1-shot 5-way Mini-ImageNet. python -u run_miniimagenet.py --shots 1 --inner-batch 10 --inner-iters 8 --meta-step...
I ran the [sine code](https://gist.github.com/joschu/f503500cda64f2ce87c8288906b09e2d#file-reptile-sinewaves-demo-py) and printed out the outer-loop updated weights without multiplying by `outerstepsize`, alongside the normal SGD weights, and they are the same, even when I set `innerepochs`...
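That observation matches how the Reptile outer update is defined: theta moves toward the inner-loop result phi, so with `outerstepsize = 1` the meta-update collapses to exactly the inner-loop SGD weights. A minimal sketch (the `inner_sgd` quadratic-loss inner loop here is a hypothetical stand-in, not the gist's sine-wave model):

```python
import numpy as np

def inner_sgd(theta, steps=8, lr=0.02, target=3.0):
    # Hypothetical inner loop: a few SGD steps on the loss (w - target)^2.
    w = theta.copy()
    for _ in range(steps):
        grad = 2.0 * (w - target)  # d/dw (w - target)^2
        w -= lr * grad
    return w

def reptile_update(theta, outerstepsize):
    # Reptile outer update: interpolate from theta toward the adapted weights phi.
    phi = inner_sgd(theta)
    return theta + outerstepsize * (phi - theta)

theta = np.zeros(1)
# With outerstepsize = 1, theta + 1 * (phi - theta) == phi, i.e. the
# meta-updated weights coincide with the plain SGD weights.
assert np.allclose(reptile_update(theta, outerstepsize=1.0), inner_sgd(theta))
```

So the weights only differ from plain SGD once `outerstepsize < 1` (or once updates from several tasks are averaged); printing them before the `outerstepsize` multiplication will always reproduce the SGD result.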
Any plans to release a multi-GPU version of this? It looks like we should be able to run the `meta_batch_size` iterations of the outer loop in `reptile.train_step` in parallel on...
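The iterations are indeed independent: each one only reads the current initialization, runs its own inner loop, and contributes a weight delta that is averaged at the end. A toy sketch of that structure, assuming a hypothetical `inner_train` and CPU threads rather than multiple GPUs:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def inner_train(theta, task_target, steps=5, lr=0.1):
    # Hypothetical per-task inner loop: SGD toward a task-specific target.
    w = theta.copy()
    for _ in range(steps):
        w -= lr * 2.0 * (w - task_target)
    return w - theta  # this task's contribution (weight delta)

def parallel_train_step(theta, task_targets, meta_step=0.1):
    # The meta-batch iterations only read theta and share no mutable state,
    # so their inner loops can run concurrently; the deltas are then averaged.
    with ThreadPoolExecutor() as pool:
        deltas = list(pool.map(lambda t: inner_train(theta, t), task_targets))
    return theta + meta_step * np.mean(deltas, axis=0)

theta = np.zeros(2)
theta = parallel_train_step(theta, [np.ones(2), -np.ones(2)])
```

A real multi-GPU version would additionally need one model replica (and TensorFlow graph/session placement) per device, which is where the engineering effort lies; the outer-loop math itself parallelizes trivially.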
I ran the following code: ``` # transductive 1-shot 5-way Omniglot. python -u run_omniglot.py --shots 1 --inner-batch 25 --inner-iters 3 --meta-step 1 --meta-batch 10 --meta-iters 100000 --eval-batch 25 --eval-iters 5...