easy-few-shot-learning
How to run inference on CPU? How to load a support set?
Hey @gabrielsicara, I trained a model using a GPU on Google Colab but was unable to run inference with it on the CPU. Also, how do I run inference on a single input image on CPU? In other words, how do I define a support set for the model to use during processing? I'm exploring these questions on my own and will definitely open a pull request, but if you can answer them it would be helpful. I tried different ways of loading the model as well; it works fine on GPU.
![Screenshot 2022-01-06 at 12 23 08 AM](https://user-images.githubusercontent.com/25080928/148273006-d8d469e4-6f27-4948-b8f8-3c7cfee887e4.png)
![Screenshot 2022-01-06 at 12 24 35 AM](https://user-images.githubusercontent.com/25080928/148272995-8dfe7351-1507-40ed-aa29-931682db8663.png)
I always seem to get stuck here.
If anyone else has trouble making this work on CPU: use num_workers=0
in the DataLoader definition; there's a memory leak in the PyTorch worker implementation.
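The workaround above can be sketched as follows. This is a minimal toy example (random tensors standing in for a real dataset), showing only where `num_workers=0` goes:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 20 fake RGB images of size 8x8 with 5 classes.
dataset = TensorDataset(torch.randn(20, 3, 8, 8), torch.arange(20) % 5)

# num_workers=0 keeps data loading in the main process, which sidesteps
# the worker-related memory issue mentioned above (an observed
# workaround, not an official fix).
loader = DataLoader(dataset, batch_size=4, num_workers=0)

for images, labels in loader:
    pass  # run your CPU inference on each batch here
```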
Other than that, I'm still figuring out how to create a separate support set that can be loaded and used directly for inference.
Hi @lakshaychhabra , I'm very sorry for the late response.
All few-shot classifiers have a process_support_set
method that must be called before inferring on a query set. Is this what you need?
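To illustrate the pattern, here is a minimal sketch of a prototypical-style classifier with a `process_support_set` method. This is a toy stand-in written in plain PyTorch, not the library's actual implementation; the backbone and tensor shapes are invented for the example:

```python
import torch
from torch import nn

class TinyPrototypicalClassifier(nn.Module):
    """Toy few-shot classifier illustrating the process_support_set /
    query pattern (not easyfsl's actual code)."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.prototypes = None

    def process_support_set(self, support_images, support_labels):
        # Compute one prototype (mean feature vector) per class.
        features = self.backbone(support_images)
        self.prototypes = torch.stack(
            [features[support_labels == label].mean(dim=0)
             for label in torch.unique(support_labels)]
        )

    def forward(self, query_images):
        # Score queries by negative distance to each class prototype.
        features = self.backbone(query_images)
        return -torch.cdist(features, self.prototypes)

# Identity-like backbone as a placeholder feature extractor.
model = TinyPrototypicalClassifier(nn.Flatten()).to("cpu")

# 5-way 2-shot support set of 16-dim feature vectors (fake data).
support_images = torch.randn(10, 16)
support_labels = torch.arange(10) % 5

# Call process_support_set BEFORE inferring on the query set.
model.process_support_set(support_images, support_labels)
scores = model(torch.randn(4, 16))  # shape: (n_queries, n_classes)
```

When loading GPU-trained weights for CPU inference, passing `map_location="cpu"` to `torch.load` is the usual approach.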