Caffe-LMDBCreation-MultiLabel
Label LMDB not right
When I created the LMDB following the code, I get this error: inner_num_ == bottom[1]->count() (1 vs. 20) Number of labels must match number of predictions. Here 20 is the number of classes, so it looks like the label LMDB is not right: it cannot hold one image with 20 labels.
Please check that the data you are inputting to create_label_lmdb.py is in the format N x 20 (where N is the number of training images). Also, have you changed the prototxt appropriately? The num_output of the fc8 layer should be 20! Other than that, I don't see why this might happen. Hope this helps!
Thanks for your answer. Yes, the labels are in the format N x 20; each row is a vector like [0, 1, 0, 1, ..., 1, 0] of length 20, where 1 means the image belongs to that class and 0 means it does not, like this:

[[ 1. 0. 0. ..., 0. 0. 0.]
 [ 0. 0. 0. ..., 0. 0. 0.]
 [ 0. 0. 0. ..., 0. 0. 0.]
 ...,
 [ 0. 0. 0. ..., 0. 1. 0.]
 [ 1. 0. 0. ..., 0. 0. 0.]
 [ 0. 0. 1. ..., 0. 0. 0.]]

shape: (1540, 20)
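For anyone reading along, a multi-hot matrix like that can be built with plain NumPy. This is only a sketch; `image_classes` below is made-up example data, not the poster's real label list:

```python
import numpy as np

num_classes = 20

# Hypothetical per-image class-index lists (illustration only).
image_classes = [[0], [3, 17], [2, 5, 11]]

# Build an N x 20 multi-hot matrix: row i has a 1 at every class image i belongs to.
labels = np.zeros((len(image_classes), num_classes), dtype=np.float32)
for i, classes in enumerate(image_classes):
    labels[i, classes] = 1.0

print(labels.shape)  # (3, 20)
```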
The num_output of the fc8 layer is 20.
The core code of create_label_lmdb.py is:
This looks correct and should not give the error you are mentioning! However, please make sure that you are not using a single-label loss like Softmax, but a multi-label loss like SigmoidCrossEntropy!
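To illustrate why SigmoidCrossEntropy fits here: it treats each of the 20 classes as an independent binary decision, so a target row may contain several 1s, whereas Softmax assumes exactly one class per image. A rough NumPy sketch of the loss (not Caffe's actual implementation, but the same per-element formula, averaged over the batch):

```python
import numpy as np

def sigmoid_cross_entropy_loss(scores, targets):
    """Batch-averaged sigmoid cross-entropy for multi-label targets.

    scores:  N x C raw scores (e.g. fc8 output); targets: N x C in {0, 1}.
    Uses the numerically stable form
        max(x, 0) - x * t + log(1 + exp(-|x|))
    which equals -[t*log(sigmoid(x)) + (1-t)*log(1-sigmoid(x))].
    """
    loss = np.maximum(scores, 0) - scores * targets + np.log1p(np.exp(-np.abs(scores)))
    return loss.sum() / scores.shape[0]

# Toy scores/targets for illustration only.
scores = np.array([[2.0, -1.0], [0.5, 0.5]])
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
print(sigmoid_cross_entropy_loss(scores, targets))
```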
Also, I think you should input labels in [0, 255] when creating the label LMDB (see the README), since Caffe rescales the values by 1/255! Your current label feed will not cause any error, but it will affect training!
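In other words, if the label data layer applies something like transform_param { scale: 0.00390625 } (i.e. 1/255, the usual Caffe setup; check your own prototxt), the stored labels should be {0, 255} so they come out of the transformer as {0, 1}. A minimal NumPy sketch, with a made-up 4-class label matrix for illustration:

```python
import numpy as np

# Hypothetical N x 4 multi-hot labels in {0, 1} (illustration only).
labels = np.array([[1, 0, 0, 1],
                   [0, 1, 0, 0]], dtype=np.float32)

# Store them as {0, 255} so that the data layer's 1/255 scaling
# turns them back into {0, 1} during training.
labels_lmdb = (labels * 255).astype(np.uint8)

print(labels_lmdb)                 # 255 where the label was 1, 0 elsewhere
print(labels_lmdb * (1 / 255.0))   # back to the original 0/1 labels
```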
I use SigmoidCrossEntropyLoss, which is why I'm puzzled! I did not use your create_data_lmdb.py; instead I used Caffe's own convert_imageset. Could that be the problem?
That should not matter, as long as you are using two data layers, one for the data and one for the labels, as shown in the attached prototxt!
Thanks, I will try it again!
I figured out why: the accuracy style was wrong.
Do you know how to test the accuracy when training a multi-label net like this?
For training, I reckon the phase should be TRAIN!
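One common approach (a sketch, not part of this repo): skip Caffe's Accuracy layer, which assumes a single argmax label per image, and instead threshold each sigmoid output at 0.5 and score every class decision independently, e.g. in a separate test script:

```python
import numpy as np

def multilabel_accuracy(scores, targets, threshold=0.5):
    """Element-wise accuracy for multi-label prediction.

    Caffe's Accuracy layer picks one argmax class per image, which is why
    it breaks on N x 20 multi-hot labels; here each of the C sigmoid
    outputs is thresholded and compared to its target independently.
    """
    probs = 1.0 / (1.0 + np.exp(-scores))
    preds = (probs >= threshold).astype(np.float32)
    return (preds == targets).mean()

# Toy scores/targets for illustration only.
scores = np.array([[3.0, -2.0, 0.2],
                   [-1.0, 4.0, -3.0]])
targets = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
print(multilabel_accuracy(scores, targets))  # fraction of correct class decisions (5 of 6 here)
```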