MobileNet-Caffe
caffemodel
Hi, since the VPN service has been shut down, we can hardly access Google. Could you share the pretrained caffemodel with us through pan.baidu.com or another mirror?
You can check this repo to download the model: https://github.com/cdwat/MobileNet-Caffe
Hi, I tried to train the model with deploy.prototxt, but it always fails. Can you share your training files, such as train.prototxt?
You cannot train a model using deploy.prototxt. You can reuse most of deploy.prototxt and write your own train_val.prototxt.
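[Editor's note] A minimal sketch of what that conversion involves, assuming the usual Caffe fine-tuning workflow; the LMDB paths, batch size, and the `fc7` bottom name are placeholders/assumptions, not values from this thread:

```
# Replace the deploy input with phase-specific Data layers:
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    scale: 0.017
    mirror: true
    crop_size: 224
    mean_value: 103.94
    mean_value: 116.78
    mean_value: 123.68
  }
  data_param {
    source: "path/to/train_lmdb"   # placeholder path
    batch_size: 32                 # placeholder batch size
    backend: LMDB
  }
}
# ... copy the layer stack from deploy.prototxt here ...
# Then replace the final Softmax ("prob") with a loss layer:
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc7"      # assumed name of the last classifier layer
  bottom: "label"
  top: "loss"
}
```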
Yes, I know that. I have tried to write my own train_val.prototxt file, but it still fails. Something must be wrong somewhere, so I would like to see your train_val.prototxt file to find out where my mistake is. Thanks.
@LiuRJun please make sure that you add `param` blocks to control the learning rate (lr_mult) and weight decay (decay_mult) for the Convolution and Scale layers.
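[Editor's note] A short sketch of what the advice above could look like; the layer names and multiplier values are illustrative assumptions, not taken from the repo:

```
layer {
  name: "conv2_1/dw"                    # illustrative name
  type: "Convolution"
  bottom: "conv1"
  top: "conv2_1/dw"
  param { lr_mult: 1 decay_mult: 1 }    # weight lr/wd multipliers
  convolution_param {
    num_output: 32
    bias_term: false
    pad: 1
    kernel_size: 3
    group: 32
    stride: 1
    weight_filler { type: "msra" }
  }
}
layer {
  name: "conv2_1/dw/scale"              # illustrative name
  type: "Scale"
  bottom: "conv2_1/dw"
  top: "conv2_1/dw"
  param { lr_mult: 1 decay_mult: 0 }    # scale (gamma): learned, no weight decay
  param { lr_mult: 1 decay_mult: 0 }    # bias (beta): learned, no weight decay
  scale_param { bias_term: true }
}
```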
Yes, I did that. My own train_val.prototxt looks like this:

```
name: "MOBILENET"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    scale: 0.017
    mirror: false
    crop_size: 224
    mean_value: [103.94, 116.78, 123.68]
  }
  data_param {
    source: "examples/myMobileNet/english_train_lmdb"
    batch_size: 42
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param {
    scale: 0.017
    mirror: false
    crop_size: 224
    mean_value: [103.94, 116.78, 123.68]
  }
  data_param {
    source: "examples/myMobileNet/english_test_lmdb"
    batch_size: 42
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 decay_mult: 1 }
  convolution_param {
    num_output: 32
    bias_term: false
    pad: 1
    kernel_size: 3
    stride: 2
    weight_filler { type: "msra" }
  }
}
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
}
......
```
And when I train the net, I get this error: `[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 91:15: Message type "caffe.LayerParameter" has no field named "scale_param"`
Do you use the official Caffe? If yes, please check line 91 of your prototxt.
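[Editor's note] For context: the `has no field named "scale_param"` parse error typically means the Caffe build predates the Scale layer, since in an up-to-date official Caffe the BatchNorm layer is normally followed by a Scale layer written like this (names are illustrative):

```
layer {
  name: "conv1/scale"        # illustrative name
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  param { lr_mult: 1 decay_mult: 0 }
  param { lr_mult: 1 decay_mult: 0 }
  scale_param { bias_term: true }   # this field only parses if ScaleLayer exists in your build
}
```

If this fragment fails to parse, updating to a current official Caffe is the usual fix.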
Do I need to add an fc layer after the last conv layer when I write my own train.prototxt?
You don't need to add a new layer; just change the name and num_output of the last conv layer. Of course, you can also remove the last conv layer and add a new fc layer instead.
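[Editor's note] The two options above could be sketched as follows; the layer names, `num_output`, and lr multipliers are placeholders chosen for illustration:

```
# Option 1: keep a 1x1 conv as classifier, rename it so pretrained
# weights are not loaded into it, and set your own class count.
layer {
  name: "fc7-mytask"                     # new name, not in the caffemodel
  type: "Convolution"
  bottom: "pool6"                        # assumed name of the global pooling layer
  top: "fc7-mytask"
  param { lr_mult: 10 decay_mult: 1 }    # larger lr for the freshly initialized layer
  convolution_param {
    num_output: 20                       # your number of classes
    kernel_size: 1
    weight_filler { type: "msra" }
  }
}

# Option 2: replace the last conv with an InnerProduct (fc) layer.
layer {
  name: "fc_new"
  type: "InnerProduct"
  bottom: "pool6"
  top: "fc_new"
  param { lr_mult: 10 decay_mult: 1 }
  param { lr_mult: 20 decay_mult: 0 }
  inner_product_param {
    num_output: 20                       # your number of classes
    weight_filler { type: "msra" }
    bias_filler { type: "constant" value: 0 }
  }
}
```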
Hello Shicai,
I hope you are doing well. Could you explain how to add params to control lr and wd for the conv and scale layers?
Could you please show me one short example?
Thanks and regards, Manu Goyal.
May I add you on WeChat to ask you some questions?