MobileNet-SSD

Shape not matching when fine-tuning on my own dataset

Open mpeniak opened this issue 6 years ago • 7 comments

Hi,

Thanks for this work! I followed the instructions to train this model on my own dataset but got the following error. I am not sure why I am getting this. Could you please help me?

```
I0116 18:54:42.010017  7985 solver.cpp:75] Solver scaffolding done.
I0116 18:54:42.021683  7985 caffe.cpp:155] Finetuning from mobilenet_iter_73000.caffemodel
F0116 18:54:42.050644  8045 annotated_data_layer.cpp:205] Check failed: std::equal(top_shape.begin() + 1, top_shape.begin() + 4, shape.begin() + 1)
*** Check failure stack trace: ***
I0116 18:54:42.061666  7985 upgrade_proto.cpp:77] Attempting to upgrade batch norm layers using deprecated params: mobilenet_iter_73000.caffemodel
I0116 18:54:42.061727  7985 upgrade_proto.cpp:80] Successfully upgraded batch norm layers using deprecated params.
    @     0x7f60a6760daa  (unknown)
    @     0x7f60a6760ce4  (unknown)
I0116 18:54:42.066272  7985 net.cpp:761] Ignoring source layer conv11_mbox_conf
I0116 18:54:42.066344  7985 net.cpp:761] Ignoring source layer conv13_mbox_conf
I0116 18:54:42.066370  7985 net.cpp:761] Ignoring source layer conv14_2_mbox_conf
I0116 18:54:42.066388  7985 net.cpp:761] Ignoring source layer conv15_2_mbox_conf
I0116 18:54:42.066407  7985 net.cpp:761] Ignoring source layer conv16_2_mbox_conf
I0116 18:54:42.066423  7985 net.cpp:761] Ignoring source layer conv17_2_mbox_conf
    @     0x7f60a67606e6  (unknown)
    @     0x7f60a6763687  (unknown)
    @     0x7f60a6fc9362  caffe::AnnotatedDataLayer<>::load_batch()
    @     0x7f60a6f6c7a9  caffe::BasePrefetchingDataLayer<>::InternalThreadEntry()
    @     0x7f60a6ff11d0  caffe::InternalThread::entry()
    @     0x7f609c27aa4a  (unknown)
    @     0x7f608d382184  start_thread
    @     0x7f60a509437d  (unknown)
    @              (nil)  (unknown)
Aborted
```

mpeniak avatar Jan 16 '18 17:01 mpeniak

I got a similar error, did you solve it?

gombru avatar Feb 09 '18 11:02 gombru

Decrease the `batch_size` in the train and test prototxt files.
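
For reference, in Caffe the batch size lives in the `data_param` block of the data layer. A minimal sketch of the relevant fragment (the LMDB path and the value 8 are just examples; check your own `train.prototxt` and `test.prototxt`):

```
layer {
  name: "data"
  type: "AnnotatedData"
  ...
  data_param {
    source: "trainval_lmdb"   # example path -- use your own LMDB
    batch_size: 8             # lower this if you hit memory or shape errors
    backend: LMDB
  }
}
```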

RushDon avatar Feb 19 '18 13:02 RushDon

Thanks @RushDon, that solved the issue! However, after that a new error appeared. The cause was that I was initializing the fine-tuning with the MobileNet-SSD caffemodel instead of the MobileNet classification model. After fixing that, I could use the default batch sizes (24 for training and 8 for testing), which take 6.6 GB on my Titan.

gombru avatar Feb 19 '18 15:02 gombru

@gombru Maybe your dataset has 1-channel or 4-channel images.

ujsyehao avatar Jun 07 '18 12:06 ujsyehao

Add `force_color: true` to the `transform_param` section in the train and test prototxt files.
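
For example, the fragment could look like this (surrounding fields are illustrative; `force_color` tells Caffe's image decoder to load every image as 3-channel color, which avoids shape mismatches from grayscale or RGBA images in the dataset):

```
layer {
  name: "data"
  type: "AnnotatedData"
  ...
  transform_param {
    force_color: true   # decode all images as 3-channel
    ...
  }
}
```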

lincolnhard avatar Dec 04 '18 05:12 lincolnhard

@lincolnhard I have added `force_color: true` but the error still occurs. Kindly share your comments.

shiva13425 avatar Mar 02 '19 20:03 shiva13425

@RushDon @gombru where can I change the batch size?

AbhimanyuAryan avatar Jul 24 '19 11:07 AbhimanyuAryan