CIFAR10-img-classification-tensorflow
High test accuracy, but the predictions on images are wrong
Hello Sir,
Thanks for sharing your code. I tried to train a model based on your code, but something strange happened: when I run block [60] of CIFAR10_image_classification.ipynb, I get a test accuracy of around 74%, but the predictions on the images below are very wrong. Could you do me a favor and guide me on how to solve this issue?
In any case, thanks for your patience and help.
Best, Chih-Chieh
Hi @chihchiehchen
First of all, I need to see what the images look like. If the images are not at all similar to the CIFAR-10 dataset, the prediction accuracy will probably be very low.
Hello,
I used the CIFAR-10 test set. I mean, I did not modify anything and simply followed your code. When I run block [60] of CIFAR10_image_classification.ipynb, I get 74% (compared with the 58% shown in the ipynb file), but the softmax prediction of images (mainly the function display_image_predictions) is always strange (as the pictures below show, it is never correct). I am not sure whether I misunderstood something or whether something needs to be modified.
Aha, I get your point.
Did you run the notebook yourself, or are you just reading it through? I made some changes to the last one with the saved checkpoint.
Let me re-run the notebook, and I will let you know soon.
Hello,
I ran the notebook from top to bottom. In any case, thanks a lot!
I have just re-run the notebook after cloning the repo entirely.
And I got a Testing Accuracy of 0.728...
Let me think about what could go wrong in your case.
Hello,
How about the softmax prediction below? Is it correct? In my case the test accuracy is also high, but the problem is that the softmax predictions (of the random samples from the last few lines of test_model()) are very strange.
In any case, thanks for your help.
OK, that might be some kind of indexing problem when displaying with matplotlib. I will look into it and fix it. But the model and its behaviour are OK, I guess.
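One quick way to confirm that is to compare the argmax of the logits with the ground-truth labels on a random test batch. This is a rough sketch, not a cell from the notebook; it assumes the loaded_x / loaded_y / loaded_keep_prob / loaded_logits tensors and the test_features / test_labels arrays from test_model(), and it runs inside the same with tf.Session(graph=loaded_graph) as sess: block:

import random
import numpy as np

# Sanity check: if this agrees with the ~74% test accuracy, the model is fine
# and only the display is off.
idx = random.sample(range(len(test_features)), 100)
batch_features = test_features[idx]
batch_labels = test_labels[idx]

logits = sess.run(loaded_logits,
                  feed_dict={loaded_x: batch_features,
                             loaded_y: batch_labels,
                             loaded_keep_prob: 1.0})

predicted = np.argmax(logits, axis=1)
truth = np.argmax(batch_labels, axis=1)
print('Batch accuracy:', np.mean(predicted == truth))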
Hello,
I also hope so (but I cannot figure out where the problem is).
In any case, thanks for taking the time on this; I have also learned something from the discussion.
OK.
The thing is, the name on top of the picture is the ground-truth label, not the predicted one, and the bar graph on the right-hand side is the predicted result.
So it can look a bit strange. However, you can simply compare the ground truth and the predicted result side by side.
Hello,
But I think the problem is that, since the test accuracy is high, the labels and the predicted results should more or less coincide (am I wrong?), but I have tried several times and the predicted results are always different from the labels.
Is it? In my case, I got 4 out of 5 correct. If you don't get the right result every time, check whether the true label at least comes close, e.g. ends up in 2nd place.
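For example, you can look at the top-3 softmax values for one random test image. Again, this is only a sketch, not a notebook cell; it assumes the same loaded_* tensors and test arrays, and it runs inside the with tf.Session(graph=loaded_graph) as sess: block (tf is assumed to be imported already, as in the notebook):

import random
import numpy as np

# Pick one random test image and look at its top-3 predicted classes.
i = random.randint(0, len(test_features) - 1)
feat = test_features[i].reshape(1, 32, 32, 3)
label = test_labels[i].reshape(1, 10)

top_values, top_indices = sess.run(
    tf.nn.top_k(tf.nn.softmax(loaded_logits), 3),
    feed_dict={loaded_x: feat, loaded_y: label, loaded_keep_prob: 1.0})

true_id = int(np.argmax(label))
print('True class id:', true_id)
print('Top-3 predicted ids:', top_indices[0], 'with probabilities:', top_values[0])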
Hello,
Sorry, but I still get bad results (usually the label is not among the top-3 predicted classes) for the softmax prediction. Maybe I really did something wrong and need some time to figure it out.
In any case, thanks for sharing the idea and giving me some suggestions; maybe I can provide some feedback once I have figured out what I did wrong.
Thanks a lot.
Chih-Chieh
This is the softmax prediction from my Jupyter notebook:
It seems that the prediction was wrong.
Hello,
I get the same problem: running your program without changing anything, I get a high accuracy, but the predictions on the random samples are all different from the true labels. Have you found out what should be modified?
Thank you for your answer :)
Hey,
I think the code for printing the random samples does indeed have a mistake in it, but the rest of the algorithm is good =)
If you want to print some random examples with their predicted labels, here's a simple piece of code you can use (inside the "with tf.Session(graph=loaded_graph) as sess:" statement):
# Assumes random, np (numpy), plt (matplotlib.pyplot) and tf are already imported,
# as they are earlier in the notebook.
for _ in range(n_samples):
    # Pick a random test image (valid indices run from 0 to len(test_features) - 1).
    num_test = random.randint(0, len(test_features) - 1)
    test_feat = test_features[num_test, :, :, :]
    test_feat_reshape = test_feat.reshape(1, 32, 32, 3)
    test_label = test_labels[num_test, :].reshape(1, 10)
    # Recover the integer class id from the one-hot label.
    label_ids = label_binarizer.inverse_transform(np.array(test_label))
    # Predicted class id = argmax of the softmax over the loaded logits.
    test_prediction_ind = sess.run(
        tf.math.argmax(tf.nn.softmax(loaded_logits), axis=1),
        feed_dict={loaded_x: test_feat_reshape,
                   loaded_y: test_label,
                   loaded_keep_prob: 1.0})
    # Show the image with the true and predicted class names in the title.
    plt.imshow(test_feat)
    plt.title('True label: ' + list_label_names[label_ids[0]] +
              ' - Predicted label: ' + list_label_names[test_prediction_ind[0]])
    plt.show()
Thanks. Can you contribute your code?
If this is what I have found, then axies[image_i][1].set_yticklabels(pred_names[::-1]) needs to be axies[image_i][1].set_yticklabels(pred_names[::]).
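If that is the cause, the underlying rule is simply that the barh values and the y-tick labels must be in the same order. A minimal illustration of the idea (the variable names are borrowed from display_image_predictions, but this is a sketch, not the exact notebook cell):

import numpy as np

# pred_values / pred_names are assumed to be the top-k probabilities and the
# corresponding class names for one image, both in the same (descending) order.
n_predictions = 3
margin = 0.05
ind = np.arange(n_predictions)
width = (1. - 2. * margin) / n_predictions

axies[image_i][1].barh(ind + margin, pred_values, width)
axies[image_i][1].set_yticks(ind + margin)
# Whichever ordering the values use, the labels must use the same one;
# reversing only one of them ([::-1] on the names but not on the values)
# puts every class name next to the wrong bar.
axies[image_i][1].set_yticklabels(pred_names)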
OK, this is odd. I modified the notebook to use my own images:
So, going back to debugging that notebook, I get the original:
axies[image_i][1].set_yticklabels(pred_names[::-1])
Now I have two notebooks and I am not sure why they differ: