FaceVerification
About DeepID2
Hi, recently I have been trying to implement the DeepID models, from DeepID1 to DeepID2. I have implemented your DeepID1 model successfully, but my DeepID2 model is always incorrect: it not only fails to converge, the results are also wrong. Are you still working on the DeepID2 model, and have you made any new progress?
Recently I have been working on face alignment. A friend of mine said he successfully trained a DeepID2 model by setting the loss_weight of the contrastive loss to a very low value, say 1e-5. You may try that.
@happynear Thank you very much! I will try it again!!
@happynear By the way, the margin in the contrastive loss has a great effect on the loss. How should this value be set? Also, could your friend share the net structure with me? Thanks very much!
As the DeepID2 paper describes, using same-label pairs only can already get accuracy very close to the best performance. In this case, we can generate pairs of images with the same label and set the margin to 0; with margin 0 and only same-label pairs, the contrastive loss reduces to half the squared distance between the two features.
@happynear I have another question: did your friend construct the DeepID2 model exactly as you provided it? If not, can you please tell me the differences or key points? Besides, I also use WebFace to train the model, but I modified data_layer.cpp to generate an lmdb dataset of image pairs (image1 image2 label1 label2), then use a slice layer to split each pair into image1 and image2 and run two convnets, almost the same as your model. Is this approach right? If not, how do you generate the image pairs?
- His net is the same as mine, and is described in the CASIA-WebFace paper.
- I use a MATLAB script to generate the lists of image pairs. The scripts are in the ./dataset folder of this repo.
@happynear Thank you very much! I will go through this process again!
@happynear Hi, the WebFace dataset is very unbalanced: it has 10,575 persons, but the number of images per person is small. Do you use any strategy to balance the generated pairs, for example so that the probability of a pair being the same person or different persons is roughly equal?
Face++ has done an experiment. The conclusion is to delete all identities with fewer than 10 images.
Paper: http://arxiv.org/pdf/1501.04690.pdf .
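A small Python sketch of that cleanup step, assuming a WebFace-style layout with one folder per identity; the function name, paths, and extensions are only illustrative:

```python
import os

def keep_identities(root, min_images=10):
    """Return the identity folders under `root` that contain at least `min_images`
    images, following the suggestion to drop identities with fewer than 10 images."""
    kept = []
    for identity in sorted(os.listdir(root)):
        folder = os.path.join(root, identity)
        if not os.path.isdir(folder):
            continue
        n = sum(1 for f in os.listdir(folder) if f.lower().endswith(('.jpg', '.png')))
        if n >= min_images:
            kept.append(identity)
    return kept
```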
@happynear Hi, I found a problem while generating image pairs with your MATLAB scripts. I want to generate only same-label pairs, so I set same_r = 1, but then no corresponding train.txt is generated; if I set same_r = 0.5 it works. How can I solve this? My dataset is WebFace.
That is really messy code. Just try a few more times...
In fact, for same-pair generation, I suggest you use a full permutation, i.e. generate all possible pairs within each identity. Since there are not too many images for a single identity, the final number will not be too large.
Moreover, you can first extract features with a pre-trained network and mine the hard pairs only, then fine-tune the pre-trained network with DeepID2.
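A minimal Python sketch of the full-permutation pair generation suggested above (this is not the repo's MATLAB script; the entry format and the name `list_same_pairs` are made up for illustration). It takes a list of (image_path, label) entries and emits every within-identity pair in the "image1 image2 label1 label2" format mentioned earlier in this thread:

```python
import itertools
from collections import defaultdict

def list_same_pairs(entries):
    """entries: iterable of (image_path, label). Returns all within-identity pairs."""
    by_label = defaultdict(list)
    for path, label in entries:
        by_label[label].append(path)
    pairs = []
    for label, paths in by_label.items():
        # full permutation: every unordered combination of two images of one identity
        for a, b in itertools.combinations(paths, 2):
            pairs.append((a, b, label, label))
    return pairs

if __name__ == "__main__":
    entries = [("id0/1.jpg", 0), ("id0/2.jpg", 0), ("id0/3.jpg", 0), ("id1/1.jpg", 1)]
    for a, b, l1, l2 in list_same_pairs(entries):
        print(a, b, l1, l2)
```

For hard-pair mining, the same list could then be filtered by keeping only the pairs whose pre-trained features are far apart.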
@happynear I am not very clear about fine-tuning the pre-trained model with DeepID2. Do you mean that I can use any pre-trained model as the initialization for DeepID2?
Sure, the DeepID1 and DeepID2 models can share the same trained parameters.
@happynear I found a problem: I can generate two train.txt files with all same-label pairs and then use Caffe to convert them to lmdb datasets. However, if the training process picks the two images randomly each time rather than in order, the two images will almost certainly come from different classes. How do you deal with this problem? Thank you!
About the DeepID2, in your implementation you use:
"layer { name: "simloss" type: "ContrastiveLoss" loss_weight: 0.00001 contrastive_loss_param { margin: 0 } bottom: "pool5" bottom: "pool5_p" bottom: "label1" bottom: "label2" top: "simloss" }"
You use 4 blobs as bottom, but in the Caffe docs the contrastive loss only takes 3 (the third one is the similarity, whose value should be 0 if the images differ and 1 if they are similar). As I read it, your similarity parameter would be based only on label1, which I think is wrong. Is this correct, or did you change something else in the code?
Thank you, and by the way good work!
@dfdf The stock contrastive loss can indeed only accept 3 blobs as input. However, DeepID2 uses identification and verification information simultaneously to train the net, so it needs each image's label; we cannot just pass the pair similarity (0 or 1) to the contrastive loss. In order to use both labels, the contrastive loss layer must be modified to accept four blobs as input. For the modification, you can refer to the author's Windows Caffe.
@dfdf As @denghuiru said, I have modified the contrastive loss layer to get a more elegant implementation. The modified layer is in my caffe-windows repository: https://github.com/happynear/caffe-windows/blob/master/src/caffe/layers/contrastive_loss_layer.cpp
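For readers who just want the idea, here is a numpy sketch of what a 4-bottom contrastive loss computes; it is not the actual C++ layer, and it uses the corrected distance-based margin formulation discussed later in this thread:

```python
import numpy as np

def contrastive_loss(feat1, feat2, label1, label2, margin=0.0):
    """Sketch of the 4-bottom contrastive loss: the similarity flag is derived
    from the two identity labels instead of being supplied as a separate blob."""
    sim = (label1 == label2).astype(np.float32)       # 1 for same identity, 0 otherwise
    d2 = np.sum((feat1 - feat2) ** 2, axis=1)         # squared Euclidean distance per pair
    loss_same = sim * d2                              # pull same-identity features together
    loss_diff = (1.0 - sim) * np.square(np.maximum(margin - np.sqrt(d2), 0.0))
    return 0.5 * np.mean(loss_same + loss_diff)

# With margin = 0 and only same-label pairs (as suggested above), this is simply
# half the mean squared distance between the paired features.
```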
@happynear Hi, I recently followed your suggestions for DeepID2, using all same-label pairs, setting the loss weight to 1e-5, and training on a small dataset with only 9000 pairs. Some questions:
1. During training, the contrastive loss becomes very small and contributes almost nothing to the total loss. Is this right?
2. What accuracy did your friend get with the DeepID2 model, and how does he compute the face verification accuracy?
3. In DeepID2, how can I check whether the two losses (contrastive loss and softmax loss) adjust the parameters together? I keep suspecting the parameter update process is wrong.
Thank you!
@denghuiru What values are you using in the solver? Thank you
@dfdf The author has provided the solver in caffe_proto and I also use his version!
@denghuiru Did you get any improvements doing what you suggested? Were you able to make the network converge using a valid contrastive loss?
@dfdf I am just following the author's suggestions to train this model and have not gotten any impressive results yet. I think fully implementing this model will still take a long time! However, if you make any progress, please contact me anytime!
@denghuiru Yeah, sure. I am still trying to get the network to converge during training, but if I make any progress I will post here.
@dfdf Thank you very much. I am also trying to get the network to converge, but haven't made much progress!
@denghuiru
I got some ideas from a thread on the caffe-users mailing list.
I will try the following:
I created 3 different prototxt files and will run them, changing only the contrastive loss weight and the learning rate:
1. Contrastive loss weight: 0, learning rate: 0.01
2. Contrastive loss weight: 0.00032, learning rate: 0.001
3. Contrastive loss weight: 0.006, learning rate: 0.0001
I will train each for 30~50k iterations before changing the parameters. It will take a while to train all of this, but I will post here what I can achieve.
If you make any progress in the meantime, feel free to let me know. Thanks
@dfdf At present, I have generated the dataset following the author's scripts and am running the model. The learning rate is 0.01 and the contrastive loss weight is 0.00001, but I found that the softmax loss stays around 9.0 and doesn't decrease. I am still trying to find out why this happens; have you run into this problem?
By the way, once we manage to train DeepID2 (my network is still training now), how do we get the output vector for one image? Should we use the combination of dropout5 + dropout5_p, or choose one of them as the DeepID output?
@dfdf Once you get the trained model, you can extract the dropout5 feature only; it is 320-dimensional. Then do face verification with it. By the way, will you share your model with me? Does your model converge correctly?
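A rough pycaffe sketch of that verification step, assuming a deploy prototxt for a single branch; the file names, the blob name 'dropout5', and the preprocessing are placeholders to adjust to your own net. It extracts the 320-d feature for each image and compares a pair with cosine similarity against a threshold:

```python
import numpy as np
import caffe

# Placeholder paths -- replace with your own deploy prototxt and trained weights.
net = caffe.Net('deepid2_deploy.prototxt', 'deepid2_iter_100000.caffemodel', caffe.TEST)

def extract_feature(image, blob_name='dropout5'):
    """image: preprocessed array shaped like the net's input blob.
    Returns the 320-d feature from the named blob."""
    net.blobs['data'].data[...] = image
    net.forward()
    return net.blobs[blob_name].data[0].flatten().copy()

def verify(img1, img2, threshold=0.5):
    """Declare 'same person' when cosine similarity exceeds the threshold;
    the threshold itself should be tuned on a validation split."""
    f1, f2 = extract_feature(img1), extract_feature(img2)
    cos = np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-10)
    return cos > threshold, cos
```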
@happynear I found a problem with your Caffe version. I am using your Windows version (caffe-windows repository), but it seems you don't have the latest version of the contrastive loss; for example, this bug fix is not included: https://github.com/nickcarlevaris/caffe/commit/7e2fceb1e91cfe48eddb3569e29aaef4b9ca1a2a
Maybe that is the reason for some of the convergence problems.