
How to run inference on CPU? How to load a support set?

Open lakshaychhabra opened this issue 2 years ago • 2 comments

Hey @gabrielsicara, I trained a model using a GPU on Google Colab but was unable to run inference with it on CPU. Also, how do I run inference for a single input image on CPU? That is, how do I define a support set that the model will use for processing? I'm trying to explore these on my own and will definitely open a pull request, but if you can answer them it would be helpful. I tried different ways of loading as well; it works fine on GPU.

[Screenshots of the code and error attached]

It always seems to get stuck here: [screenshot attached]

lakshaychhabra avatar Jan 05 '22 18:01 lakshaychhabra

So, if someone is having an issue making it work on CPU: use num_workers=0 in the DataLoader definition; there's a memory leak in the PyTorch implementation. Other than that, I'm still figuring out how to create a separate support set that can be loaded and used directly for inference.
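The workaround above can be sketched as follows. This is a minimal, hypothetical example: the dummy dataset and the tiny model are placeholders, not the setup from the screenshots; the only essential part is `num_workers=0` on the `DataLoader` and moving everything to the CPU device.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors standing in for a real few-shot dataset (placeholder data).
images = torch.randn(16, 3, 8, 8)
labels = torch.randint(0, 2, (16,))
dataset = TensorDataset(images, labels)

# num_workers=0 keeps all data loading in the main process,
# avoiding the worker-related hang reported above when running on CPU.
loader = DataLoader(dataset, batch_size=4, num_workers=0)

device = torch.device("cpu")
# Placeholder model standing in for the trained few-shot classifier.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 8 * 8, 2),
).to(device)
model.eval()

with torch.no_grad():
    for batch_images, _ in loader:
        scores = model(batch_images.to(device))
```

If the model was saved from a GPU session, loading it with `torch.load(path, map_location="cpu")` is also needed so the weights land on the CPU.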

lakshaychhabra avatar Jan 10 '22 11:01 lakshaychhabra

Hi @lakshaychhabra , I'm very sorry for the late response.

All few-shot classifiers have a process_support_set method that must be called before inferring on a query set. Is this what you need?
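To illustrate the pattern, here is a toy sketch in plain PyTorch. The `ProtoClassifier` class below is a hypothetical stand-in (a mean-prototype classifier), not easyfsl's implementation; it only mirrors the `process_support_set` / `forward` workflow described above, and all data is dummy data.

```python
import torch

class ProtoClassifier(torch.nn.Module):
    """Toy prototypical classifier illustrating the support-set pattern
    (a hypothetical stand-in, not the easyfsl implementation)."""

    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.prototypes = None

    def process_support_set(self, support_images, support_labels):
        # Compute one prototype (mean embedding) per class from the support set.
        feats = self.backbone(support_images)
        self.prototypes = torch.stack(
            [feats[support_labels == c].mean(0) for c in support_labels.unique()]
        )

    def forward(self, query_images):
        # Score queries by negative distance to each class prototype.
        feats = self.backbone(query_images)
        return -torch.cdist(feats, self.prototypes)

# Placeholder backbone and dummy 2-way 5-shot task.
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 16))
model = ProtoClassifier(backbone).to("cpu").eval()

support_images = torch.randn(10, 3, 8, 8)
support_labels = torch.tensor([0] * 5 + [1] * 5)
query_images = torch.randn(6, 3, 8, 8)

with torch.no_grad():
    # Call process_support_set once, then infer on any queries.
    model.process_support_set(support_images, support_labels)
    scores = model(query_images)
    predictions = scores.argmax(dim=1)
```

With easyfsl's classifiers the calling convention is the same: call `process_support_set(support_images, support_labels)` once, then pass query images through the model.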

ebennequin avatar Mar 22 '22 09:03 ebennequin