PyTorch-MAML
Why load pickle files instead of loading images?
I have seen other implementations load images from the train/val/test folders as a preprocessing step, e.g. https://github.com/yaoyao-liu/mini-imagenet-tools. I am just curious whether using pickle to load all the data at once is memory efficient: https://github.com/fmu2/PyTorch-MAML/blob/master/datasets/mini_imagenet.py#L18-L32
It happens that the mini-ImageNet dataset (with 84x84 inputs) is small enough to fit in memory. Usually this means faster data loading, but in the case of MAML the main bottleneck is the nested optimization loop, so loading all the data into memory does not save much time.
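For reference, the pickle-based loading amounts to caching the whole split in RAM once and then indexing into the cached arrays. A minimal sketch is below; the dict keys 'data' and 'labels' and the class name are assumptions for illustration, not the exact layout used in the linked file:

```python
import pickle
from PIL import Image
from torch.utils.data import Dataset

class MiniImageNetPickle(Dataset):
    """Sketch of pickle-based mini-ImageNet loading (assumed file layout).

    The entire split is unpickled once in __init__, so __getitem__ only
    indexes an in-memory array instead of reading and decoding a JPEG.
    """
    def __init__(self, pickle_path, transform=None):
        with open(pickle_path, 'rb') as f:
            # Assumed: a dict holding a uint8 image array and integer labels.
            pack = pickle.load(f, encoding='latin1')
        self.data = pack['data']      # e.g. shape (N, 84, 84, 3), uint8
        self.labels = pack['labels']  # e.g. list of N class ids
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # Images are already stored as 84x84 pixel arrays, so no file I/O here.
        img = Image.fromarray(self.data[idx])
        if self.transform is not None:
            img = self.transform(img)
        return img, self.labels[idx]
```

At 84x84x3 bytes per image, the ~60k mini-ImageNet images take on the order of 1-2 GB, which is why caching the whole dataset is feasible here, whereas it would not be for full-resolution ImageNet.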