iResNet
training scheme
First of all, thank you for sharing the code. I have some questions about the training scheme; the training prototxt files confuse me.
In train_rob_stage_one.prototxt (https://github.com/leonzfa/iResNet/blob/master/models/ROB_training/train_rob_stage_one.prototxt), iResNet is trained on four different datasets, and train_rob_stage_two.prototxt follows the same scheme as the first prototxt.
I am wondering how you fine-tuned on the KITTI dataset, especially for the CVPR 2018 results.
I would also like to know the answer.
I think I have found the answer in the Caffe documentation:

Fine-tuning requires the -weights model.caffemodel argument for model initialization.
For example, you can run:
# train LeNet
caffe train -solver examples/mnist/lenet_solver.prototxt
# train on GPU 2
caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 2
# resume training from the half-way point snapshot
caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot examples/mnist/lenet_iter_5000.solverstate
For a full example of fine-tuning, see examples/finetuning_on_flickr_style, but the training call alone is
# fine-tune CaffeNet model weights for style recognition
caffe train -solver examples/finetuning_on_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
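Applied to this repo, the KITTI fine-tuning step might then look like the following sketch. Note this is only a guess at how the authors did it: the solver file, net prototxt, caffemodel name, and learning rate below are all assumptions, not files shipped in the iResNet repo.

```
# solver_kitti_finetune.prototxt -- hypothetical sketch, not from the repo
net: "models/ROB_training/train_kitti.prototxt"   # assumed KITTI-only train net
base_lr: 0.00002         # lowered for fine-tuning (assumed value)
lr_policy: "multistep"
momentum: 0.9
snapshot_prefix: "snapshots/iresnet_kitti_ft"

# Launched with the stage-two weights, per the Caffe docs quoted above:
#   caffe train -solver solver_kitti_finetune.prototxt \
#               -weights iresnet_rob_stage_two.caffemodel
```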