HDC.caffe
Why did you change the initial GoogLeNet loss1 output from 1000 to 128? And why did you change the lr_mult at the same time?
The 128 issue: the author sets the embedding size of each submodel to 128 and, at test time, concatenates them into a 384-dimensional ensemble feature vector to represent each sample. This embedding size follows "Deep metric learning via lifted structured feature embedding".
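In Caffe this change amounts to editing the `num_output` of the classifier `InnerProduct` layer in the prototxt. A minimal sketch, assuming the standard BVLC GoogLeNet layer naming ("loss1/classifier"); the actual HDC.caffe prototxt may name or wire the layer differently:

```protobuf
layer {
  name: "loss1/classifier"
  type: "InnerProduct"
  bottom: "loss1/fc"
  top: "loss1/classifier"
  inner_product_param {
    num_output: 128   # was 1000 (ImageNet classes); now the per-submodel embedding size
  }
}
```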
So why did you change the lr_mult at the same time?
Because the final fc layer has no pretrained weights to fine-tune from, while the other layers are fine-tuned from a GoogLeNet model pretrained on ImageNet. The freshly initialized fc layer therefore needs a larger learning rate than the pretrained layers.
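A common way to express this in Caffe is to give the new layer a different name (so the pretrained weights are not loaded into it) and boost its per-layer `lr_mult` relative to the fine-tuned layers. A hedged sketch, not the repo's exact prototxt; the name "loss1/classifier_embed" and the 10x/20x multipliers are illustrative, though they follow common Caffe fine-tuning practice:

```protobuf
layer {
  name: "loss1/classifier_embed"   # renamed: no matching blob in the pretrained model, so trained from scratch
  type: "InnerProduct"
  bottom: "loss1/fc"
  top: "loss1/classifier_embed"
  param { lr_mult: 10  decay_mult: 1 }   # weights: learn ~10x faster than pretrained layers
  param { lr_mult: 20  decay_mult: 0 }   # bias
  inner_product_param { num_output: 128 }
}
```

Pretrained layers typically keep `lr_mult: 1` so they only drift slowly from their ImageNet initialization.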