easy-few-shot-learning
Inference on one image
Thank you for sharing. I trained the few-shot algorithm you provided on a dataset of my own. However, how do I run inference on a single image? Note that my train and test datasets contain the same classes, but different images.
Hi! All methods in EasyFSL implement a forward() method that can be called on a tensor of queries (typically of shape [number_of_queries * number_of_channels * height * width], in your case with number_of_queries = 1, I guess).
Note that you first need to provide a support set of labelled instances to your model by calling the process_support_set() method.
These methods and their docstring are here. Please let me know if anything is unclear!
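To make the flow concrete, here is a minimal, self-contained sketch of the process_support_set() / forward() pattern. It uses a toy Prototypical-Networks-style classifier rather than EasyFSL's actual classes, and the backbone, image sizes, and class names are illustrative assumptions only:

```python
import torch
from torch import nn


class SimplePrototypicalClassifier(nn.Module):
    """Toy classifier mimicking the EasyFSL call pattern:
    process_support_set() once, then forward() on queries."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.prototypes = None

    def process_support_set(self, support_images, support_labels):
        # One prototype (mean feature vector) per class in the support set.
        features = self.backbone(support_images)
        self.prototypes = torch.stack(
            [
                features[support_labels == label].mean(dim=0)
                for label in torch.unique(support_labels)
            ]
        )

    def forward(self, query_images):
        # Score each query by its negative distance to every prototype.
        features = self.backbone(query_images)
        return -torch.cdist(features, self.prototypes)


# Toy backbone: just flattens 3x8x8 images into feature vectors.
model = SimplePrototypicalClassifier(backbone=nn.Flatten())

# Hypothetical 2-way 3-shot support set.
support_images = torch.randn(6, 3, 8, 8)
support_labels = torch.tensor([0, 0, 0, 1, 1, 1])
model.process_support_set(support_images, support_labels)

# Inference on one image: a query batch with number_of_queries = 1.
one_image = torch.randn(1, 3, 8, 8)
scores = model(one_image)
predicted_class = scores.argmax(dim=1).item()
print(scores.shape)  # torch.Size([1, 2])
```

The key point is that the single image still needs a leading batch dimension, so its tensor has shape (1, 3, height, width).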
Hello @Amrou7, have you been able to run inference on one image? I have been trying to get predictions for a webcam feed; any suggestions on how to achieve this?
Hello @MokhtarOuardi @Amrou7, have you both been able to run inference on new images?
Hello @karndeepsingh @MokhtarOuardi. Can you tell me in detail what is blocking you?
@ebennequin Thank you for the heads-up, I have solved my issues by adding a new dataloader.
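The dataloader itself isn't shown in this thread, but for the webcam use case one possible approach (purely illustrative, not @MokhtarOuardi's actual code; image_size and the helper name are assumptions) is to convert each captured frame into a single-image query batch:

```python
import numpy as np
import torch


def frame_to_query(frame_hwc_uint8: np.ndarray, image_size: int = 84) -> torch.Tensor:
    """Convert one HxWx3 uint8 frame (e.g. from a webcam) into a
    (1, 3, image_size, image_size) float tensor a backbone can consume.
    image_size=84 is an arbitrary example; match your model's input."""
    tensor = torch.from_numpy(frame_hwc_uint8).float() / 255.0  # scale to [0, 1]
    tensor = tensor.permute(2, 0, 1).unsqueeze(0)  # HWC -> (1, C, H, W)
    # Resize to the backbone's expected spatial size.
    return torch.nn.functional.interpolate(
        tensor, size=(image_size, image_size), mode="bilinear", align_corners=False
    )


# Simulated 480x640 RGB frame in place of an actual camera capture.
frame = (np.random.rand(480, 640, 3) * 255).astype("uint8")
query = frame_to_query(frame)
print(query.shape)  # torch.Size([1, 3, 84, 84])
```

The resulting tensor can then be passed directly to the model's forward() once a support set has been processed.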
@MokhtarOuardi Can you please share your approach? Also, did you run it on the CPU?
@ebennequin Thank you for building this useful library. Would you mind pointing out what the dimensions of support_images and support_labels should be? I couldn't figure it out. Also, the link you provided above doesn't work anymore. Thank you.
Hi! We handle the support set as a typical batch in PyTorch, so we expect support_images to be of shape (n_images, 3, image_size, image_size) (note that this shape depends on your model's expected input shape; only the n_images part is a requirement of EasyFSL) and support_labels to be of shape (n_images,). You're making a good point: tensor shapes should be a part of the docstring! I'm putting the enhancement label on this issue so this will be added to the next version.
Thank you for pointing out the broken link, I edited my previous response :)