
Hard to reproduce the training loss or accuracy of your trained model

Open wenston2006 opened this issue 9 years ago • 3 comments

Hi, thanks for sharing the code. I tried to train the 1_F model by finetuning it on some new data (20,000 pictures). I got a test output error of 0.006 during training, but a test error of 0.03-0.04 on the five keypoints, which is about 0.01 higher than your model's. Moreover, as mentioned in other issues, I get a different training loss in every new training run. Can you share some experience on how to train the model? Can I try repeated finetuning, i.e. using the model produced by the last finetuning round as the starting point for a new one? Of course the newly trained model should have a lower training loss than the previous one.

wenston2006 avatar Apr 01 '16 13:04 wenston2006
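Finetuning from an existing caffemodel (including the repeated finetuning described above, where each round starts from the weights produced by the previous round) can be driven from pycaffe. Below is a minimal sketch, not the repository's own training script; the file names `solver_F.prototxt` and `1_F_prev.caffemodel` are placeholders for the level-1 F solver and the previously trained weights.

```python
import caffe

# Use GPU 0 (assumption); switch to caffe.set_mode_cpu() if no GPU is available.
caffe.set_mode_gpu()
caffe.set_device(0)

# Load the solver that defines the 1_F network and the training schedule.
solver = caffe.SGDSolver('solver_F.prototxt')

# Initialize the weights from a previously trained model instead of from
# scratch -- this is the finetuning step. For a second round of finetuning,
# point this at the caffemodel produced by the previous round.
solver.net.copy_from('1_F_prev.caffemodel')

# Run the full training schedule defined in the solver prototxt.
solver.solve()
```

The command-line tool does the same thing via `caffe train --solver=solver_F.prototxt --weights=1_F_prev.caffemodel`.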

I also found that the test output error during training plateaus after about 30,000 iterations, so I only trained for around 200,000 iterations. Should I keep training for 1,000,000 iterations, as in the default setting in the prototxt files?

wenston2006 avatar Apr 01 '16 13:04 wenston2006
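If training is stopped early and later turns out to need more iterations, Caffe can resume from a saved solver state rather than starting over. A rough sketch, assuming a hypothetical snapshot file name and that `max_iter` in the solver prototxt has been raised first (e.g. from 200,000 to 1,000,000):

```python
import caffe

caffe.set_mode_gpu()
caffe.set_device(0)

# Re-create the solver from the (edited) prototxt.
solver = caffe.SGDSolver('solver_F.prototxt')

# Restore both the weights and the solver state (iteration count,
# learning-rate schedule, momentum history) from the last snapshot.
# '1_F_iter_200000.solverstate' is a placeholder name.
solver.restore('1_F_iter_200000.solverstate')

# Continue training from iteration 200,000 onward.
solver.solve()
```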

Moreover, can you send me a copy of your Face_alignment.pdf? I failed to open this file.

wenston2006 avatar Apr 01 '16 14:04 wenston2006

The model I trained used around 1,000,000 or 2,000,000 iterations. The loss will still decrease as training goes on; maybe you should plot the loss to see whether it is.

As for the PDF file, maybe something is wrong with GitHub's content hosting; it is probably a network error. I have uploaded the PDF to Baidu Drive.

luoyetx avatar Apr 02 '16 04:04 luoyetx
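One way to follow luoyetx's suggestion to plot the loss is to parse the Caffe training log. A minimal sketch, assuming the solver output was captured to a hypothetical file `train.log`; the regular expression matches the standard "Iteration N ... loss = X" lines that the solver prints:

```python
import re
import matplotlib.pyplot as plt

iters, losses = [], []
with open('train.log') as f:
    for line in f:
        # Pick up lines such as "Iteration 20000, loss = 0.00512".
        m = re.search(r'Iteration (\d+).*loss = ([\d.eE+-]+)', line)
        if m:
            iters.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(iters, losses)
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.yscale('log')  # small landmark losses are easier to compare on a log scale
plt.savefig('loss_curve.png')
```

If the curve is still trending downward at 200,000 iterations, it is worth continuing toward the default 1,000,000-iteration schedule; if it has flattened, further training is unlikely to help.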