
Why did you change the initial GoogLeNet loss1 output from 1000 to 128? Also, why did you change the lr_mult at the same time?

Open jsjs0827 opened this issue 7 years ago • 3 comments

jsjs0827 avatar Aug 17 '18 06:08 jsjs0827

The 128 issue: the author sets the embedding size of each sub-model to 128 and concatenates them into a 384-d ensemble feature vector to represent each sample at test time. This embedding size follows "Deep metric learning via lifted structured feature embedding".
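A minimal numpy sketch of the test-time ensemble described above: three hypothetical 128-d sub-model embeddings are concatenated into one 384-d descriptor. Whether HDC L2-normalizes each branch before concatenation is an assumption here, not something stated in this thread.

```python
import numpy as np

# Hypothetical per-branch embeddings for one test sample; the real
# model produces these from its three cascaded sub-models.
emb1 = np.random.randn(128)
emb2 = np.random.randn(128)
emb3 = np.random.randn(128)

def l2_normalize(v):
    # Scale a vector to unit length (assumed preprocessing step).
    return v / np.linalg.norm(v)

# Concatenate the three 128-d branch embeddings into the 384-d
# ensemble feature vector used to represent the sample at test time.
ensemble = np.concatenate([l2_normalize(e) for e in (emb1, emb2, emb3)])
print(ensemble.shape)
```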

XinyiXuXD avatar Sep 27 '18 14:09 XinyiXuXD

So why did you change the lr_mult at the same time?

abcdvzz avatar Nov 14 '18 12:11 abcdvzz

Because the final fc layer has no pretrained weights to fine-tune from, it is trained from scratch, while the other layers are fine-tuned from a GoogLeNet model pretrained on ImageNet. So the new fc layer gets a larger lr_mult than the pretrained layers.
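A small sketch of the mechanism behind that answer: in Caffe a layer's effective learning rate is base_lr * lr_mult, so pretrained layers keep a small multiplier while the randomly initialized fc gets a larger one. The layer names and multiplier values below are illustrative, not the exact ones from the HDC prototxt.

```python
# Caffe scales each layer's step size as base_lr * lr_mult.
base_lr = 0.01  # illustrative solver base learning rate

lr_mult = {
    "conv1": 1,       # pretrained on ImageNet: fine-tune gently
    "inception": 1,   # pretrained on ImageNet: fine-tune gently
    "fc_embed": 10,   # new 128-d fc, random init: learn faster
}

# Per-layer effective learning rates.
effective_lr = {name: base_lr * m for name, m in lr_mult.items()}

# The new embedding layer moves 10x faster than the pretrained body.
ratio = effective_lr["fc_embed"] / effective_lr["conv1"]
print(ratio)
```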

XinyiXuXD avatar Nov 14 '18 12:11 XinyiXuXD