CIFAR10-img-classification-tensorflow

Get high test accuracy but the predictions on images are wrong

Open chihchiehchen opened this issue 6 years ago • 18 comments

Hello Sir,

Thanks for sharing your code. I tried to train a model based on your code, but something strange happened: when I run block [60] of CIFAR10_image_classification.ipynb, I get a test accuracy of around 74%, yet the predictions on the images below are wrong. Could you do me a favor and guide me on how to solve this issue?

In any case, thanks for your patience and help.

Best, Chih-Chieh

chihchiehchen avatar Nov 25 '18 14:11 chihchiehchen

Hi @chihchiehchen

First of all, I need to see what the images look like. If the images are not similar to the CIFAR-10 dataset, the prediction accuracy will probably be very low.

deep-diver avatar Nov 26 '18 00:11 deep-diver

Hello,

I used the CIFAR-10 test set. I mean, I did not modify anything; I just followed your code. When I run block [60] of CIFAR10_image_classification.ipynb, I get 74% (compared with the 58% shown in the ipynb file), but the softmax predictions on images (mainly from the function display_image_predictions) are always strange (as the pictures below show, they are never correct). I am not sure if I misunderstood something or if something needs to be modified.

chihchiehchen avatar Nov 26 '18 00:11 chihchiehchen

Aha, I get your point.

Did you run the notebook yourself, or are you just reading through it? I made some changes to the last one with the saved checkpoint.

Let me re-run the notebook, and I will let you know soon.

deep-diver avatar Nov 26 '18 01:11 deep-diver

Hello,

I ran the notebook from top to bottom. In any case, thanks a lot!

chihchiehchen avatar Nov 26 '18 01:11 chihchiehchen

I have just re-run the notebook after cloning the repo entirely.

And I got a Testing Accuracy of 0.728...

Let me think about what could be going wrong in your case.

deep-diver avatar Nov 26 '18 01:11 deep-diver

Hello,

How about the softmax prediction below? Is it correct? In my case the test accuracy is also high, but the problem is that the softmax predictions (of the random samples from the last few lines of test_model()) are very strange.

In any case, thanks for the help.

chihchiehchen avatar Nov 26 '18 01:11 chihchiehchen

OK, that might be some kind of indexing problem when displaying with matplotlib. I will look into it and fix it. But the model and its behaviour are OK, I guess.

deep-diver avatar Nov 26 '18 01:11 deep-diver

Hello,

I also hope so (but I cannot figure out where the problem is).

In any case, thanks for taking the time on this; I am also learning something from the discussion.

chihchiehchen avatar Nov 26 '18 02:11 chihchiehchen

OK.

The thing is, the name on top of the picture is the ground-truth label, not the predicted one, and the bar graph on the right-hand side is the predicted result.

So it could look a bit strange. However, you can simply compare the ground truth and the predicted result side by side.
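
For instance, a quick sanity check is just to compare the arg-max of each softmax output with the ground-truth class id. This is only a standalone sketch with made-up numbers (label_names, one_hot_labels, and softmax_preds here are illustrative stand-ins, not the notebook's variables):

    import numpy as np

    # Illustrative class names and fake data (not the notebook's variables).
    label_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                   'dog', 'frog', 'horse', 'ship', 'truck']
    one_hot_labels = np.eye(10)[[3, 8, 0, 5]]              # ground truth: cat, ship, airplane, dog
    softmax_preds = np.random.dirichlet(np.ones(10), size=4)  # stand-in softmax outputs

    # Compare the true class id with the arg-max of each softmax row.
    for truth, pred in zip(one_hot_labels.argmax(axis=1), softmax_preds.argmax(axis=1)):
        status = 'match' if truth == pred else 'MISMATCH'
        print('true =', label_names[truth], '| predicted =', label_names[pred], '|', status)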

deep-diver avatar Nov 26 '18 02:11 deep-diver

Hello,

But I think the problem is that, since the test accuracy is high, the labels and the predicted results should mostly coincide (am I wrong?), yet I have tried several times and the predicted results are always different from the labels.

chihchiehchen avatar Nov 26 '18 02:11 chihchiehchen

Is it? In my case, I got 4 out of 5 correct. If you don't get the right result every time, check whether the correct class at least comes close, e.g. in 2nd place.

deep-diver avatar Nov 26 '18 04:11 deep-diver

Hello,

Sorry, but I still get bad results for the softmax prediction (usually the label is not among the top-3 predicted classes). Maybe I really did something wrong and need some time to figure it out.

In any case, thanks for sharing the idea and giving me some suggestions; maybe I can provide some feedback once I figure out what I did wrong.

Thanks a lot.

Chih-Chieh

chihchiehchen avatar Nov 26 '18 06:11 chihchiehchen

This is the softmax prediction from my Jupyter notebook: [softmax screenshot]

It seems that the predictions were wrong.

panovr avatar Nov 27 '18 13:11 panovr

Hello,

I get the same problem: running your program without changing anything, I get high accuracy, but the predictions on the random samples are all different from the true labels. Have you found out what should be modified?

Thank you for your answer :)

megalinier avatar Feb 04 '19 15:02 megalinier

Hey,

I think the code for printing the random samples does indeed have a mistake in it, but the rest of the algorithm is good =)

If you want to print some random examples with their predicted labels, here's a simple snippet you can use (inside the " with tf.Session(graph=loaded_graph) as sess: " statement):

    # Assumes the notebook's imports and objects are already in scope:
    #   random, numpy as np, matplotlib.pyplot as plt, tensorflow as tf,
    #   test_features / test_labels (the preprocessed test set),
    #   label_binarizer (a LabelBinarizer fitted on range(10)),
    #   and list_label_names (the 10 CIFAR-10 class names).
    for _ in range(n_samples):
        # Pick a random test image; cap the index at len(test_features) - 1 so it stays in range.
        num_test = random.randint(0, len(test_features) - 1)
        test_feat = test_features[num_test, :, :, :]
        test_feat_reshape = test_feat.reshape(1, 32, 32, 3)
        test_label = test_labels[num_test, :].reshape(1, 10)
        # Recover the integer class id from the one-hot label.
        label_ids = label_binarizer.inverse_transform(np.array(test_label))

        # Arg-max of the softmax output of the loaded graph = predicted class id.
        test_prediction_ind = sess.run(
            tf.math.argmax(tf.nn.softmax(loaded_logits), axis=1),
            feed_dict={loaded_x: test_feat_reshape,
                       loaded_y: test_label,
                       loaded_keep_prob: 1.0})

        # Show the image with its true and predicted class names.
        plt.imshow(test_feat)
        plt.title('True label: ' + list_label_names[label_ids[0]] +
                  ' - Predicted label: ' + list_label_names[test_prediction_ind[0]])
        plt.show()

megalinier avatar Feb 04 '19 16:02 megalinier

Thanks. Can you contribute your code?

deep-diver avatar Feb 07 '19 05:02 deep-diver

If this is what I have found, then axies[image_i][1].set_yticklabels(pred_names[::-1]) needs to be axies[image_i][1].set_yticklabels(pred_names[::]).
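
Whether the slice should be [::-1] or [::] depends on how the bar values themselves are sliced in display_image_predictions; the point is that the tick labels and the bar values have to be sliced the same way, otherwise each name ends up next to the wrong bar. Here is a minimal standalone sketch of that misalignment, with made-up numbers rather than the notebook's actual code:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical top-3 softmax output for one image (made-up numbers).
    pred_names = ['cat', 'dog', 'deer']           # ordered from highest to lowest probability
    pred_values = np.array([0.70, 0.20, 0.10])
    ind = np.arange(len(pred_values))

    fig, (ax_bad, ax_ok) = plt.subplots(ncols=2, figsize=(8, 2.5))

    # Out of sync: the values are reversed for plotting but the labels are not,
    # so 'cat' ends up next to the smallest bar.
    ax_bad.barh(ind, pred_values[::-1])
    ax_bad.set_yticks(ind)
    ax_bad.set_yticklabels(pred_names)
    ax_bad.set_title('labels/values out of sync')

    # In sync: reverse both (or neither), so each name lines up with its bar.
    ax_ok.barh(ind, pred_values[::-1])
    ax_ok.set_yticks(ind)
    ax_ok.set_yticklabels(pred_names[::-1])
    ax_ok.set_title('labels/values in sync')

    plt.tight_layout()
    plt.show()

So in the notebook, checking whether pred_values and pred_names are sliced consistently inside the same function should show which variant is the correct one.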

rxk900 avatar Feb 13 '19 02:02 rxk900

OK, this is odd. I modified the notebook to use my own images: [screenshot]

So, going back to debugging that notebook, I get the original: axies[image_i][1].set_yticklabels(pred_names[::-1]). Now I have two notebooks and I am not sure why they differ: [screenshots]

rxk900 avatar Mar 10 '19 00:03 rxk900