mxnet-model-gallery
Inception V3 gives wrong predictions
I guess there is something wrong with the released network, or at least with the preprocessing code. I tried the prediction-with-pretrained example, but the results are wrong.
I also noticed that the output layer has 1008 nodes, whereas the label txt has 1001 classes.
The model is converted from the TensorFlow model. Note that the preprocessing code is different; see preprocessing.py in the zip. Also, the 1008 outputs come from Google, and outputs 1-1000 are the ILSVRC 2012 labels.
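For a single image, the preprocessing amounts to something like the sketch below. This is my own reading of the values quoted later in this thread (resize to 384, center-crop to 299x299, subtract a mean of 128, scale by 1/128); the authoritative version is preprocessing.py in the archive.

import numpy as np
from PIL import Image

def preprocess(path):
    # Resize to 384x384, then center-crop to the network's 299x299 input.
    img = Image.open(path).convert('RGB').resize((384, 384))
    arr = np.asarray(img, dtype=np.float32)
    off = (384 - 299) // 2
    arr = arr[off:off + 299, off:off + 299, :]
    # Subtract the per-channel mean of 128 and scale by 0.0078125 (1/128).
    arr = (arr - 128.0) * 0.0078125
    # HWC -> CHW, plus a batch dimension for FeedForward.predict.
    return arr.transpose(2, 0, 1)[np.newaxis, :]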
I used the provided preprocessing code but still have the same problem.
I am not sure what your problem is, but on my side it works well; I verified it on the ILSVRC 2012 validation set and the TensorFlow sample image.
I have the same problem as erogol. Could you show the code you used to verify on the ILSVRC 2012 validation set?
Same here.
What does "wrong result" mean? What is your accuracy on ImageNet?
First, resize the raw images to 384; then you can do it this way (code I used a month ago):
import csv
import mxnet as mx
import numpy as np

# Validation data: images already resized to 384, center-cropped to 299x299,
# normalized with a per-channel mean of 128 and a scale of 0.0078125 (1/128).
val = mx.io.ImageRecordIter(
    path_imgrec="model/val-384.rec",
    mean_r=128,
    mean_g=128,
    mean_b=128,
    scale=0.0078125,
    rand_crop=False,
    rand_mirror=False,
    data_shape=(3, 299, 299),
    batch_size=128)

# Load the converted Inception V3 checkpoint and run it over the validation set.
symbol, arg_params, aux_params = mx.model.load_checkpoint("model/Inception-7", 1)
model = mx.model.FeedForward(symbol=symbol, ctx=mx.gpu(),
                             arg_params=arg_params, aux_params=aux_params,
                             numpy_batch_size=1)
prob = model.predict(val)

# Old synset: label index used in val.lst -> WordNet id.
old = {}
for idx, line in enumerate(csv.reader(open("old_synset.txt"), delimiter=' ')):
    old[idx] = line[0]

# Ground-truth WordNet id for every validation image, in val.lst order.
ans = []
for line in csv.reader(open("val.lst"), delimiter='\t'):
    ans.append(old[int(line[1])])

# New synset: WordNet id -> output index of this model.
new = {}
for idx, line in enumerate(csv.reader(open("model/synset.txt"), delimiter=' ')):
    new[line[0]] = idx
new_ans = [new[s] for s in ans]

# Top-1 / top-5 accuracy over the 50,000 validation images.
top_1 = 0.
top_5 = 0.
for i in range(len(new_ans)):
    sol = new_ans[i]
    pred_top5 = prob[i, :].argsort()[::-1][:5]
    if pred_top5[0] == sol:
        top_1 += 1
    if sol in pred_top5:
        top_5 += 1
print(top_1 / 50000)
print(top_5 / 50000)
I couldn't download the model. Could anyone share it with me?
synset.txt might be wrong in this model, which leads to a wrong mapping.
Accuracy is not the point, since the results are obviously flawed for a couple of easy class images that my previous net handles successfully. I also believe synset.txt is wrong, since the number of output nodes and the number of lines in the synset do not match.
The synset is correct. Again, Google's released model has only 1008 outputs. There is a mapping between the old synset and the new synset, for which I provided code above. If it were wrong, it couldn't produce 77% accuracy.
The question is where can we find the old_synset.txt?
You can find old_synset.txt in the old Inception-BN model.
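If it helps, here is a minimal sketch of my own (assuming both synset files start each line with a WordNet id followed by a description) for turning the two synsets into a permutation of the output columns:

import csv
import numpy as np

# old_synset.txt: ships with the Inception-BN model, old ILSVRC 2012 class order.
old_wnids = [row[0] for row in csv.reader(open("old_synset.txt"), delimiter=' ')]
# model/synset.txt: WordNet id -> output index of this converted model.
new_index = {row[0]: i
             for i, row in enumerate(csv.reader(open("model/synset.txt"), delimiter=' '))}
perm = np.array([new_index[w] for w in old_wnids])

# Usage: if prob is the (N, 1008) output of model.predict above, then
# prob[:, perm] is an (N, 1000) matrix in the old ILSVRC 2012 class order.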
I used the code above to test on the validation set again, but still got very low accuracy. I guess there is something wrong in either Inception-7-0001.params or synset.txt. Can you test again and add your code to https://s3.amazonaws.com/dmlc/model/inception-v3.tar.gz, so we can run it directly and get the right result?
I observe that the model works fine on CPU but not on GPU. All top predictions are skewed in the GPU setting.
I've also had some problems with this model: it gave me wrong predictions with cuDNN v3. mxnet without cuDNN, mxnet with cuDNN v4, and the CPU version all worked fine for me.
Something wrong with cuDNN v3?
Yep, I updated to cuDNN v4 and the problem is mostly resolved. Thanks for pointing it out, @u1234x1234.
But GPU execution still gives a different top-5 ordering than CPU. At least the results make sense in both cases.
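To narrow down whether the GPU/cuDNN path is the culprit, one quick check is to run the same batch through the model on CPU and on GPU and compare the top-5 indices. This is a sketch only; it uses a dummy input, which is enough to check numerical agreement between the two contexts, and a real preprocessed image can be substituted for a meaningful prediction.

import mxnet as mx
import numpy as np

symbol, arg_params, aux_params = mx.model.load_checkpoint("model/Inception-7", 1)

def top5(ctx, batch):
    # Build a FeedForward on the given context and predict the same batch.
    model = mx.model.FeedForward(symbol=symbol, ctx=ctx,
                                 arg_params=arg_params, aux_params=aux_params,
                                 numpy_batch_size=1)
    prob = model.predict(batch)
    return prob.argsort(axis=1)[:, ::-1][:, :5]

# Dummy 299x299 input; replace with a real preprocessed image batch.
batch = np.random.uniform(-1, 1, size=(1, 3, 299, 299)).astype(np.float32)
cpu_top5 = top5(mx.cpu(), batch)
gpu_top5 = top5(mx.gpu(), batch)
# Any mismatch here points at the GPU/cuDNN convolution path.
print((cpu_top5 == gpu_top5).all())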