
Can't reproduce the reported accuracies in Inception-V4 and resnext101-64x4d.

Open latifisalar opened this issue 6 years ago • 8 comments

Hi, I'm facing problems with the Caffe models of Inception-V4 and resnext101-64x4d. I can't reproduce the reported accuracies; I only get around 0.1%, which is just a random guess. I've tried both my own Python script (derived from the Caffe classification example) and the provided script (I'm aware of the crop_size and base_size and change them accordingly). I've downloaded the validation images from ImageNet and am using their val.txt, which is sorted, unlike yours. Do you have any idea what the problem could be? Thanks
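For reference, the top-1 check described above can be sketched as follows. This is a minimal illustration, not the repository's script; `predict` is a hypothetical helper standing in for the Caffe forward pass plus argmax:

```python
def top1_accuracy(val_entries, predict):
    """val_entries: iterable of (image_path, true_label_index) pairs.
    predict: callable mapping an image path to a predicted class index.
    Returns the fraction of images whose prediction matches the label."""
    correct = 0
    total = 0
    for path, label in val_entries:
        total += 1
        if predict(path) == label:
            correct += 1
    return correct / total if total else 0.0
```

With 1000 classes, a constant or randomly shuffled `predict` lands near 0.1%, which is why an accuracy at that level usually points to a label-ordering or pre-processing mismatch rather than a broken model.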

latifisalar avatar Oct 13 '17 17:10 latifisalar

I think you should pay attention to the image pre-processing:

the RGB vs. BGR channel order, and the mean and std values.
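A minimal sketch of that pre-processing (not the repository's evaluation_cls.py), assuming Inception-style normalization with mean 128 and std 128 and a 299-pixel center crop; verify these values and the channel order against the model's own prototxt before relying on them:

```python
import numpy as np

def preprocess(img_rgb, crop_size=299, mean=128.0, std=128.0, to_bgr=True):
    """img_rgb: HxWx3 float array in RGB order, pixel range 0-255.
    Returns a CHW array ready to feed to a Caffe-style net."""
    h, w, _ = img_rgb.shape
    # Center crop to crop_size x crop_size.
    top = (h - crop_size) // 2
    left = (w - crop_size) // 2
    img = img_rgb[top:top + crop_size, left:left + crop_size, :]
    if to_bgr:
        img = img[:, :, ::-1]          # RGB -> BGR channel swap
    img = (img - mean) / std           # subtract mean, divide by std
    return img.transpose(2, 0, 1)      # HWC -> CHW layout
```

Getting either the channel order or the mean/std wrong is enough to drop accuracy to near-random levels.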

soeaver avatar Oct 20 '17 15:10 soeaver

I am seeing the same problem as @latifisalar: I am not able to run Inception-V4 and get the desired accuracy on the ILSVRC2012_val set. I am using a mean value of 128 and a crop size of 395. @soeaver, could you share the transform_param that you used for the test?

ashishfarmer avatar Oct 27 '17 02:10 ashishfarmer

Never mind, figured it out - it is in evaluation_cls.py

ashishfarmer avatar Oct 27 '17 21:10 ashishfarmer

I am using the classification script with the correct configuration parameters on Inception-V3 and V4 and get really bad results. Even single-image inference gives wrong, misclassified results. The same script works well on other networks like VGG, but not on the pre-trained Inception-V3 and V4. Do you know how to fix this?

kmonachopoulos avatar Jan 25 '18 16:01 kmonachopoulos

My problem was the label order. The order of the class labels for the Inception networks is different from the order for the VGG network.
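That kind of mismatch can be fixed by re-indexing the validation list against the network's own synset ordering. A minimal illustration, with hypothetical synset lists (the actual files depend on the model):

```python
def remap_labels(val_lines, old_synsets, new_synsets):
    """val_lines: list of (image_name, label_index) pairs, where the
    indices refer to positions in old_synsets. Returns the same pairs
    re-indexed so the labels match new_synsets' ordering."""
    index_of = {wnid: i for i, wnid in enumerate(new_synsets)}
    return [(img, index_of[old_synsets[lbl]]) for img, lbl in val_lines]
```

The same class (same WordNet ID) keeps its identity; only its numeric index changes to match the network's output order.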

latifisalar avatar Jan 25 '18 17:01 latifisalar

So, what .txt file did you use for the annotations?

This is a reference of the file I am using :

```
1: 'goldfish, Carassius auratus',
2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',
3: 'tiger shark, Galeocerdo cuvieri',
4: 'hammerhead, hammerhead shark',
5: 'electric ray, crampfish, numbfish, torpedo',
6: 'stingray',
7: 'cock',
8: 'hen',
```

kmonachopoulos avatar Jan 25 '18 17:01 kmonachopoulos

The one you have is for VGG. I've attached the synsets for the Inception networks: synsets.txt. Update: new validation file for Inception: inception_val.txt

latifisalar avatar Jan 25 '18 17:01 latifisalar

I still get wrong results with that annotation list. I have tried a lot of annotation lists that I found online (including the one you gave me), and it seems that none of them give the correct results... I think this has to do with the model.

kmonachopoulos avatar Jan 25 '18 18:01 kmonachopoulos