dgd_person_reid
Training a model using the mixed dataset with JSTL - exactly as described - but the results are very bad. The loss is not decreasing as in 'archived'
Hi,
I have run the JSTL training script scripts/exp_jstl.sh exactly as the README explains; I did not miss a single step. But the output accuracies are extremely bad for all the datasets. I have attached the results here.
(It had gone through 55,000 iterations.)
I0208 15:25:00.855653 31178 solver.cpp:410] Snapshotting to binary proto file external_iter_55000.caffemodel
I0208 15:25:01.026494 31178 solver.cpp:705] Snapshotting solver state to binary proto ts/jstl/jstl_iter_55000.solverstate
I0208 15:25:01.125866 31178 solver.cpp:296] Iteration 55000, loss = 6.55498
I0208 15:25:01.138092 31178 solver.cpp:316] Iteration 55000, Testing net (#0)
I0208 15:25:09.126555 31178 solver.cpp:373] Test net output #0: accuracy = 0.01851
I0208 15:25:09.126617 31178 solver.cpp:373] Test net output #1: loss = 6.57055
I0208 15:25:09.126626 31178 solver.cpp:301] Optimization Done.
I0208 15:25:09.126631 31178 caffe.cpp:191] Optimization Done.
After feature extraction finished, the results are:
cuhk03 top-1 7.2% top-5 36.7% top-10 73.5% top-20 85.9%
cuhk01 top-1 2.1% top-5 10.3% top-10 20.6% top-20 41.2%
prid top-1 1.0% top-5 5.0% top-10 10.0% top-20 20.0%
viper top-1 1.3% top-5 6.3% top-10 12.7% top-20 24.1%
3dpes top-1 2.0% top-5 10.2% top-10 21.0% top-20 42.4%
ilids top-1 1.8% top-5 9.4% top-10 19.0% top-20 37.6%
May I know whether I missed anything, or whether something needs to be changed, to reproduce the JSTL (training from scratch) accuracy reported in Table 3 of the paper?
Also, the loss is not decreasing as in the archived logs.
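For reference, this is roughly how I compare my loss curve against the archived one (just a sketch; the log file names are placeholders for whatever paths you actually use):

```bash
# Pull the "Iteration N, loss = X" lines out of both logs and view them side by side.
# train_jstl.log and archived/jstl.log are placeholder names.
grep -o 'Iteration [0-9]*, loss = [0-9.]*' train_jstl.log   > my_loss.txt
grep -o 'Iteration [0-9]*, loss = [0-9.]*' archived/jstl.log > ref_loss.txt
# The archived run's loss decreases steadily, while mine stays around 6.5.
paste my_loss.txt ref_loss.txt | head -n 50
```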
Thanks in advance
I tried further with exp_individual.sh, and the same problem exists there too. The loss is not decreasing; it wobbles between about 5.95 and 5.5. Although the learning rate is decreasing, the loss does not go down as in the log under the 'archived' folder.
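Both can be checked from the same Caffe log, since the SGD solver prints the learning rate alongside the loss (again just a sketch; train_individual.log is a placeholder name):

```bash
# Caffe's SGD solver logs "Iteration N, lr = ..." lines next to the loss lines.
grep -o 'Iteration [0-9]*, lr = [0-9.e-]*'  train_individual.log | tail -n 20
grep -o 'Iteration [0-9]*, loss = [0-9.]*'  train_individual.log | tail -n 20
```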
Have you solved the problem yet? I am facing exactly the same problem.
@TitaniaXi It is not yet solved
Have you solved this problem yet? My results are as bad as yours:
cuhk03 top-1 9.0% top-5 45.6% top-10 88.6% top-20 92.5%
cuhk01 top-1 2.1% top-5 10.3% top-10 20.6% top-20 41.2%
prid top-1 1.0% top-5 5.0% top-10 10.0% top-20 20.0%
viper top-1 1.3% top-5 6.3% top-10 12.7% top-20 24.1%
3dpes top-1 2.3% top-5 11.5% top-10 23.2% top-20 47.4%
ilids top-1 2.4% top-5 11.2% top-10 22.9% top-20 45.6%
Have you guys solved this problem? @Cysu
I got the good results last year on Ubuntu 14.04 + NVIDIA GTX 1080 + CUDA 8 + cuDNN v4.
I ran into this problem on Ubuntu 16.04 + NVIDIA GTX 1080 Ti + CUDA 8 + cuDNN v4.
The loss decreases very slowly, and even testing with the pre-trained caffemodel gives bad results. Maybe something went wrong when installing this Caffe; it seems like the custom BN layer doesn't work?
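One thing worth checking (a sketch; the header path is the usual default and may differ on your machine) is which GPU, CUDA, and cuDNN the build is actually picking up:

```bash
# GPU model and driver
nvidia-smi
# CUDA toolkit version
nvcc --version
# cuDNN version from the header (adjust the path if cuDNN is installed elsewhere)
grep -E 'CUDNN_MAJOR|CUDNN_VERSION' /usr/local/cuda/include/cudnn.h
```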
@vinojhosan @xixity @nullmax
I finally found the reason.
The problem is that the NVIDIA GTX 1080 Ti is not compatible with cuDNN v4.
I added the BN layer to another Caffe version that uses cuDNN v5, which solved the problem.
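For anyone rebuilding: the steps are roughly as below (a sketch assuming a standard BVLC-style Caffe build; the Makefile.config values mentioned are the usual settings for a Pascal card, and your fork's config may differ):

```bash
# Makefile.config should have USE_CUDNN := 1 and a CUDA_ARCH line that includes
# -gencode arch=compute_61,code=sm_61 for Pascal cards such as the 1080 Ti.
make clean
make -j8 all
make -j8 pycaffe
```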
You can test your video card's compatibility by running the MNIST example in Caffe. In my case the loss did not decrease and the accuracy was low.
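For reference, this is the standard sanity check I mean (the stock Caffe MNIST/LeNet example, run from the Caffe root directory):

```bash
# Download and convert MNIST, then train LeNet as a quick sanity check.
./data/mnist/get_mnist.sh
./examples/mnist/create_mnist.sh
./examples/mnist/train_lenet.sh
# On a healthy build the loss drops quickly and test accuracy reaches about 0.99;
# on the broken cuDNN v4 + 1080 Ti setup the loss did not decrease.
```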