
Low accuracy on the miniImagenet dataset?

Open BoyuanJiang opened this issue 8 years ago • 7 comments

First, many thanks for your PyTorch implementation of Matching Networks. I followed your setup to run the miniImagenet example; the training accuracy reaches about 100%, but the val and test accuracy is only about 40%. In the original paper it is about 57%. So I wonder whether I ran your code incorrectly, or could you tell me the result you get on miniImagenet?

These are my logs.

BoyuanJiang avatar Aug 27 '17 10:08 BoyuanJiang

Thanks for your report. You did nothing wrong. I am still looking for the reason for this behaviour. I have updated the code to support 5-shot learning on miniImagenet, but I still get low accuracy with both 1-shot and 5-shot on miniImagenet; with the Omniglot dataset it works fine. I will look into it as soon as possible. If you find any possible fix for the code, just let me know.

gitabcworld avatar Aug 29 '17 07:08 gitabcworld

You can change n_samplesNShot to 15, and change selected_class_meta_test in def creat_episodes to selected_classes, to stay consistent with the hyperparameters in the meta-learning LSTM code. You will then get val and test accuracy of about 55% with no FCE, 5-way, 5-shot on miniImagenet.

ZUNJI avatar Dec 05 '17 09:12 ZUNJI

Hi @ZUNJI, thanks for the tip. :) But I have a question: in the create_episodes routine, one class is used as the 'target class'. If we replace selected_class_meta_test with selected_classes, that would remove the role of that one class as the target class. I am a bit confused. Can you please elaborate? Thanks. :)

ajeetksingh avatar Jan 01 '18 08:01 ajeetksingh

Yes, you are right. This change means the 'target classes' are the same as the support classes. You can read the paper and the code. The purpose of few-shot learning is to recognise target images from novel classes based on support images from those same novel classes. The essence is computing the similarities between target images and support images, and choosing the class of the most similar support image as the predicted target label. So there is no problem with this procedure.
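To make that "most similar support image" step concrete, here is a minimal NumPy sketch of nearest-neighbour classification in embedding space. This is an illustration only: the function name `predict_by_similarity` is hypothetical, and the real model uses learned CNN embeddings with an attention softmax rather than a hard argmax.

```python
import numpy as np

def predict_by_similarity(query_emb, support_embs, support_labels):
    """Predict the query's label as the label of the support embedding
    with the highest cosine similarity (a 1-shot, matching-style rule)."""
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q  # cosine similarity of the query to each support image
    return support_labels[int(np.argmax(sims))]

# Toy 3-way, 1-shot episode with 2-d embeddings.
support = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
labels = [0, 1, 2]
print(predict_by_similarity(np.array([0.9, 0.1]), support, labels))  # -> 0
```

Note that this works even though the query classes equal the support classes, which is exactly the point above: the episode's classes are novel to the trained embedding, and classification is purely by similarity within the episode.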

ZUNJI avatar Jan 02 '18 03:01 ZUNJI

Of course, you should ensure that the target images are not among the support images. So I change n_samplesNShot to 15, making n_samples 20: 5 images per class go to the support set, and the remaining 15 go to the target set.
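The split described above can be sketched as follows. This is a standalone illustration, not the repo's actual create_episodes code; `split_episode` and the dict-of-image-ids layout are assumptions for the example.

```python
import random

def split_episode(samples_per_class, n_shot=5, n_query=15, seed=0):
    """For each class, draw n_shot + n_query images without replacement,
    putting the first n_shot in the support set and the rest in the
    query (target) set, so the two sets never overlap."""
    rng = random.Random(seed)
    support, query = {}, {}
    for cls, images in samples_per_class.items():
        picked = rng.sample(images, n_shot + n_query)  # n_samples = 20 total
        support[cls] = picked[:n_shot]   # 5 images go to the support set
        query[cls] = picked[n_shot:]     # remaining 15 go to the target set
    return support, query

# Toy pool: 3 classes with 20 image ids each.
pool = {c: list(range(c * 100, c * 100 + 20)) for c in range(3)}
sup, qry = split_episode(pool)
```

Because the 20 images are sampled without replacement before being split, support and query sets are disjoint by construction, which is the guarantee this comment asks for.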

ZUNJI avatar Jan 02 '18 03:01 ZUNJI

Sorry, it is not the param 'n_samples' but the param 'number_of_samples' in def create_episodes:

```python
for c in selected_classes:
    number_of_samples = self.samples_per_class
    # if c in selected_class_meta_test:
    number_of_samples += self.n_samplesNShot
```

ZUNJI avatar Jan 02 '18 06:01 ZUNJI

> You can change n_samplesNShot to 15, and change selected_class_meta_test in def creat_episodes to selected_classes, to stay consistent with the hyperparameters in the meta-learning LSTM code. You will then get val and test accuracy of about 55% with no FCE, 5-way, 5-shot on miniImagenet.

The code randomly uses 1 class of the support set to supply the query labels. So when we set n_samplesNShot to 15, should the number of classes in selected_class_meta_test be changed as well? My other question concerns the larger query set. It is reasonable that each class still has only 1 or 5 images in the support set, since that is what makes it few-shot, but more query samples mean we can compute more loss terms. Does that violate the standard few-shot learning setting? In other words, how should the number of query samples be determined? Thank you very much!

sxjaxs avatar Nov 19 '19 07:11 sxjaxs