Caffe-MobileNetV2-ReLU6

pretrained model

Open suguliwei opened this issue 7 years ago • 4 comments

Can you share the pretrained model on ImageNet? That would be very helpful for me! Thanks!

suguliwei · Jan 25 '18

@suguliwei Sorry, I will not, but I can share some of my training details here. My solver.proto looks like this:

lr_policy: "multistep"
stepvalue: 300000
stepvalue: 600000
stepvalue: 900000
base_lr: 0.045
momentum: 0.9
max_iter: 1000000
weight_decay: 4e-5
gamma: 0.1
solver_type: SGD
batch_size: 128

I use Inception augmentation as well. You can try to train your own model.
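For completeness, here is a minimal pycaffe sketch that writes the settings quoted above to a solver file. The train_val.prototxt and solver.prototxt file names are assumptions, not the author's actual files, and note that in Caffe the batch size is set in the data layer of the net definition rather than in the solver.

```python
from caffe.proto import caffe_pb2

# Assemble a SolverParameter message with the values listed above.
solver = caffe_pb2.SolverParameter()
solver.net = "train_val.prototxt"          # assumed net definition file name
solver.lr_policy = "multistep"
solver.stepvalue.extend([300000, 600000, 900000])
solver.base_lr = 0.045
solver.momentum = 0.9
solver.max_iter = 1000000
solver.weight_decay = 4e-5
solver.gamma = 0.1
solver.solver_type = caffe_pb2.SolverParameter.SGD
# batch_size: 128 goes in the data layer of train_val.prototxt, not here.

# Serialize to the human-readable prototxt format Caffe expects.
with open("solver.prototxt", "w") as f:
    f.write(str(solver))
```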

rog93 · Jan 26 '18

@RuiminChen Can you explain why you chose the multistep policy? Was it just based on your experiments? How should the stepvalue settings be understood? What is Inception augmentation? Can you provide some information? Also, do you preprocess with mean = [128.0] and scale = 0.0078, or with some other values?

chunniunai220ml · Feb 04 '18

@chunniunai220ml
1. Why the multistep policy? Just personal experience.
2. How to understand the stepvalue? The ImageNet classification training set has about 1,280k images, so 300k iterations at batch size 128 ≈ 30 epochs.
3. Inception augmentation means the ImageNet data augmentation strategy proposed for InceptionV1 (GoogLeNet); you can find more details here: https://arxiv.org/abs/1409.4842
4. For image preprocessing, the BGR mean values [103.94, 116.78, 123.68] are subtracted and the result is multiplied by scale = 0.017 (roughly the inverse of the pixel std).
Hope this helps.
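For reference, a minimal Python sketch of that preprocessing; only the BGR means and the 0.017 scale come from the comment above, while the 224x224 input size and the use of OpenCV are assumptions:

```python
import cv2
import numpy as np

BGR_MEAN = np.array([103.94, 116.78, 123.68], dtype=np.float32)  # B, G, R means
SCALE = 0.017  # roughly 1 / pixel std

def preprocess(image_path, input_size=224):
    # cv2.imread returns an HxWx3 uint8 array in BGR order,
    # matching the BGR mean values quoted above.
    img = cv2.imread(image_path)
    img = cv2.resize(img, (input_size, input_size)).astype(np.float32)
    img = (img - BGR_MEAN) * SCALE                   # mean subtraction, then scaling
    return img.transpose(2, 0, 1)[np.newaxis, ...]   # NCHW blob for Caffe
```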

rog93 · Feb 04 '18

@RuiminChen Thanks for your timely reply. I have read the paper, and I want to know how important the data augmentation is. The paper samples crops covering 8-100% of the image area with aspect ratio in [3/4, 4/3], applies photometric distortions from another paper, and also uses random interpolation when resizing. Have you implemented all of these measures in the data layer? More importantly, did you add this data augmentation in a Caffe layer, a Python layer, or some other way?
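For illustration only (this is not the repository author's implementation), a Python sketch of the Inception-style crop sampling described above, of the kind that could live in a Caffe Python data layer: it samples a patch whose area is 8-100% of the image with aspect ratio in [3/4, 4/3] and resizes it to the network input with a randomly chosen interpolation method; the photometric distortions are omitted here.

```python
import math
import random
import cv2

INTERPOLATIONS = [cv2.INTER_NEAREST, cv2.INTER_LINEAR,
                  cv2.INTER_AREA, cv2.INTER_CUBIC, cv2.INTER_LANCZOS4]

def inception_random_crop(img, out_size=224,
                          area_range=(0.08, 1.0), ratio_range=(3/4., 4/3.)):
    h, w = img.shape[:2]
    for _ in range(10):  # a few sampling attempts, then fall back to a center crop
        target_area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*ratio_range)
        crop_w = int(round(math.sqrt(target_area * aspect)))
        crop_h = int(round(math.sqrt(target_area / aspect)))
        if 0 < crop_w <= w and 0 < crop_h <= h:
            x = random.randint(0, w - crop_w)
            y = random.randint(0, h - crop_h)
            patch = img[y:y + crop_h, x:x + crop_w]
            interp = random.choice(INTERPOLATIONS)  # random interpolation method
            return cv2.resize(patch, (out_size, out_size), interpolation=interp)
    # Fallback: center crop of the short side.
    side = min(h, w)
    y0, x0 = (h - side) // 2, (w - side) // 2
    return cv2.resize(img[y0:y0 + side, x0:x0 + side], (out_size, out_size))
```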

chunniunai220ml · Feb 05 '18