
About DeepID2

Open denghuiru opened this issue 9 years ago • 36 comments

Hi, recently I have been trying to move from DeepID1 to DeepID2. I have successfully implemented your DeepID1 model, but my DeepID2 model is always wrong: it does not converge and the results are incorrect. So I wonder, are you still working on the DeepID2 model, and have you made any new progress?

denghuiru avatar Dec 15 '15 06:12 denghuiru

Recently I have been working on face alignment. A friend of mine said he successfully trained a DeepID2 model by setting the loss_weight of the contrastive loss to a very low value, say 1e-5. You may try it again.

happynear avatar Dec 15 '15 07:12 happynear

@happynear Thank you very much! I will try it again!!

denghuiru avatar Dec 15 '15 08:12 denghuiru

@happynear By the way, the margin in the contrastive loss has a great effect on the loss; how should I set this value? Also, could your friend share the net structure with me? Thanks very much!

denghuiru avatar Dec 15 '15 08:12 denghuiru

As described in the DeepID2 paper, using same-identity pairs only already gets accuracy very close to the best performance. In that case we can generate pairs of images with the same label and set the margin to 0.
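
Roughly, the contrastive loss looks like the sketch below (illustrative Python, not the Caffe layer itself); with margin = 0 the different-pair term is always zero, so only same-label pairs contribute:

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=0.0):
    """f1, f2: (N, D) features from the two branches; same: (N,) 1/0 pair labels."""
    d = np.linalg.norm(f1 - f2, axis=1)                 # Euclidean distance per pair
    pos = same * d ** 2                                  # pulls same-identity pairs together
    neg = (1 - same) * np.maximum(margin - d, 0) ** 2    # pushes different pairs past the margin
    return 0.5 * np.mean(pos + neg)

# With margin = 0, max(0 - d, 0) is always 0, so different-identity pairs
# add nothing to the loss -- feeding only same pairs is therefore enough.
```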

happynear avatar Dec 15 '15 10:12 happynear

@happynear I have another question: did your friend construct the DeepID2 model exactly as you provided it? If not, could you tell me the differences or key points? Besides, I also use WebFace to train the model, but I modified data_layer.cpp to generate an LMDB of image pairs (image1 image2 label1 label2) and then use a slice layer to split each pair into image1 and image2 and run two convnets, almost the same as your model. Is this approach right or not? If not, how do you generate the image pairs?

denghuiru avatar Dec 16 '15 02:12 denghuiru

  • His net is the same as mine, and is described in the CASIA-WebFace paper.
  • I use a MATLAB script to generate the lists of image pairs. They are in the ./dataset folder of this repo (the sketch below shows the idea in Python).
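
It is not the actual MATLAB script, but the idea is roughly this (`same_r` is the fraction of same-identity pairs; the names are just for illustration):

```python
import random
from collections import defaultdict

def make_pair_list(image_list, num_pairs, same_r=0.5):
    """image_list: list of (path, label); returns (path1, path2, label1, label2) tuples."""
    by_label = defaultdict(list)
    for path, label in image_list:
        by_label[label].append(path)
    same_labels = [l for l in by_label if len(by_label[l]) >= 2]  # need >=2 images for a same pair

    pairs = []
    for _ in range(num_pairs):
        if random.random() < same_r:                      # same-identity pair
            l = random.choice(same_labels)
            p1, p2 = random.sample(by_label[l], 2)
            pairs.append((p1, p2, l, l))
        else:                                             # different-identity pair
            l1, l2 = random.sample(list(by_label), 2)
            pairs.append((random.choice(by_label[l1]),
                          random.choice(by_label[l2]), l1, l2))
    return pairs
```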

happynear avatar Dec 16 '15 02:12 happynear

@happynear Thank you very much! I will do this process again!

denghuiru avatar Dec 16 '15 02:12 denghuiru

@happynear Hi, the WebFace dataset is very unbalanced: it has 10575 persons, but the number of images per person is small. Do you use any strategy to balance the generated image pairs, for example so that the probability of a pair being the same person or different persons is roughly equal?

denghuiru avatar Dec 16 '15 03:12 denghuiru

Face++ has done an experiment. The conclusion is to delete all identities with fewer than 10 images.

Paper: http://arxiv.org/pdf/1501.04690.pdf .
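
In code that filter is trivial, something like (illustrative only):

```python
from collections import Counter

def drop_small_identities(image_list, min_images=10):
    """Keep only identities with at least `min_images` images; image_list is (path, label) tuples."""
    counts = Counter(label for _, label in image_list)
    return [(path, label) for path, label in image_list if counts[label] >= min_images]
```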

happynear avatar Dec 16 '15 03:12 happynear

@happynear Hi, I found a problem when generating image pairs with your MATLAB scripts. I want to generate only same-identity pairs, so I set same_r=1, but then the corresponding train.txt is not generated; if I set same_r=0.5 it works. How can I solve this problem? My dataset is WebFace.

denghuiru avatar Dec 17 '15 06:12 denghuiru

That code is really messy. Just try a few more times...

happynear avatar Dec 17 '15 06:12 happynear

In fact, for same-pair generation I suggest you use a full permutation, i.e. generate all possible pairs. Since there are not too many images per identity, the final number will not be too large.
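
Something like this (a Python sketch of the idea):

```python
from itertools import combinations
from collections import defaultdict

def all_same_pairs(image_list):
    """Generate every same-identity pair; an identity with k images yields k*(k-1)/2 pairs."""
    by_label = defaultdict(list)
    for path, label in image_list:
        by_label[label].append(path)
    return [(p1, p2, label, label)
            for label, paths in by_label.items()
            for p1, p2 in combinations(paths, 2)]
```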

Moreover, you can first extract features with a pre-trained model and mine only the hard pairs, then fine-tune the pre-trained network with DeepID2.
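
A rough sketch of the mining step, assuming you already have an L2-normalized feature vector for every image from the pre-trained model:

```python
import numpy as np

def mine_hard_same_pairs(pairs, features, keep_ratio=0.3):
    """Keep the same-identity pairs whose pre-trained features are farthest apart.

    pairs    : list of (path1, path2, label, label)
    features : dict mapping image path -> L2-normalized feature vector
    """
    sims = np.array([np.dot(features[p1], features[p2]) for p1, p2, _, _ in pairs])
    order = np.argsort(sims)                      # lowest similarity first = hardest same pairs
    keep = order[:int(len(pairs) * keep_ratio)]
    return [pairs[i] for i in keep]
```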

happynear avatar Dec 17 '15 06:12 happynear

@happynear I am not very clear about fine-tuning the pre-trained model with DeepID2. Do you mean that I can use any pre-trained model as the initialization for DeepID2?

denghuiru avatar Dec 17 '15 07:12 denghuiru

Sure, the DeepID1 and DeepID2 models can share the same trained parameters.
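
With Caffe this just means initializing the DeepID2 solver from the DeepID1 weights; layers with matching names are copied, the rest start from scratch. A minimal pycaffe sketch (the file names are placeholders):

```python
import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver('deepid2_solver.prototxt')     # solver pointing at the DeepID2 net
solver.net.copy_from('deepid1_pretrained.caffemodel')   # copy weights of layers with matching names
solver.solve()                                          # fine-tune with softmax + contrastive loss
```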

happynear avatar Dec 17 '15 08:12 happynear

@happynear I found a problem: I can generate two train.txt files containing only same-identity pairs and then use Caffe to convert them to LMDB datasets. However, if the training process picks the two images randomly rather than in order, the two images will usually come from different classes. How should I deal with this problem? Thank you!

denghuiru avatar Dec 17 '15 09:12 denghuiru

About DeepID2: in your implementation you use

"layer { name: "simloss" type: "ContrastiveLoss" loss_weight: 0.00001 contrastive_loss_param { margin: 0 } bottom: "pool5" bottom: "pool5_p" bottom: "label1" bottom: "label2" top: "simloss" }"

You use 4 blobs as bottoms, but according to the Caffe docs the contrastive loss only takes 3 (the third blob is the similarity, whose value should be 0 if the pair differs and 1 if it is similar). So, as I read it, your similarity would be based only on label1, which I think is totally wrong. Is this correct? Or did you change something else in the code?

Thank you, and by the way good work!

dfdf avatar Dec 17 '15 12:12 dfdf

@dfdf The standard contrastive loss indeed accepts only 3 blobs as input. However, DeepID2 uses identification and verification information simultaneously to train the net, so each image's label is needed, and we cannot just feed a 0/1 pair similarity to the contrastive loss. In order to use the pair labels, the contrastive loss layer must be modified to accept four blobs as input; for the modification you can refer to the author's Windows Caffe.
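
The gist of the modification (not the actual C++ code, just the idea) is that the layer derives the 0/1 similarity from the two label blobs internally:

```python
import numpy as np

def pair_similarity(label1, label2):
    """1 if the pair shares an identity, 0 otherwise -- computed from the two label blobs."""
    return (np.asarray(label1) == np.asarray(label2)).astype(np.float32)

# The result plays the role of the usual third (similarity) bottom of ContrastiveLoss,
# so no separate 0/1 similarity blob has to be stored in the LMDB.
```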

denghuiru avatar Dec 18 '15 01:12 denghuiru

@dfdf As @denghuiru said, I have modified the contrastive loss layer to get a more elegant implementation. The modified layer is in my caffe-windows repository: https://github.com/happynear/caffe-windows/blob/master/src/caffe/layers/contrastive_loss_layer.cpp

happynear avatar Dec 18 '15 01:12 happynear

@happynear Hi, I recently followed your suggestion for DeepID2, using only same-identity pairs, a loss weight of 1e-5, and a small dataset of only 9000 pairs.

1. During training the contrastive loss becomes very small and contributes almost nothing to the total loss; is this right?
2. Besides, what accuracy does your friend get with the DeepID2 model, and how does he calculate the face verification accuracy?
3. In DeepID2, how can I check whether the two losses (contrastive loss and softmax loss) update the parameters together? I keep thinking the parameter update process is wrong!

Thank you!

denghuiru avatar Dec 21 '15 08:12 denghuiru

@denghuiru What values are you using in the solver? Thank you.

dfdf avatar Dec 21 '15 13:12 dfdf

@dfdf The author has provided the solver in the caffe_proto folder, and I use his version too!

denghuiru avatar Dec 22 '15 03:12 denghuiru

@denghuiru Did you get any improvement doing what you suggested? Were you able to make the network converge with a valid contrastive loss?

dfdf avatar Dec 23 '15 15:12 dfdf

@dfdf I am just following the author's suggestions for this model and have not obtained any impressive results yet. I think fully implementing this model will still take a long time! However, if you make any progress, please contact me anytime!

denghuiru avatar Dec 25 '15 01:12 denghuiru

@denghuiru Yeah, sure. I am still trying to get the network to converge during training, but if I make any progress I will post it here.

dfdf avatar Dec 28 '15 12:12 dfdf

@dfdf Thank you very much. I am also trying to get the network to converge, but I have not made much progress!

denghuiru avatar Dec 29 '15 01:12 denghuiru

@denghuiru

I got some ideas from a discussion on the caffe-users mailing list.

I will try the following:

I created 3 different prototxt files and will run them, changing only the contrastive loss weight and the learning rate:

1 - Contrastive loss weight: 0, learning rate: 0.01
2 - Contrastive loss weight: 0.00032, learning rate: 0.001
3 - Contrastive loss weight: 0.006, learning rate: 0.0001

I will run 30~50k iterations before changing the parameters. It will take a while to train, but I will post here what I achieve.

If you make any progress in the meantime, feel free to let me know. Thanks.

dfdf avatar Dec 29 '15 12:12 dfdf

@dfdf At present I have generated the datasets following the author's scripts and run the model with a learning rate of 0.01 and a contrastive loss weight of 0.00001. However, the softmax loss stays at about 9.0 (roughly ln(10575), i.e. chance level on WebFace) and doesn't decrease. I am still trying to find out why this happens; have you met this problem?

denghuiru avatar Jan 04 '16 02:01 denghuiru

By the way, if we manage to train DeepID2 (my network is still training now), how do we get the output vector for one image? Should we take the combination of dropout5 + dropout5_p, or do we choose one of them as the DeepID output?

dfdf avatar Jan 04 '16 14:01 dfdf

@dfdf Once you have the trained model, you can extract the dropout5 feature only (it is 320-dimensional) and then do face verification. By the way, will you share your model with me? Does your model converge correctly?
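
For the verification itself, a common quick check is to score each test pair by the cosine similarity of the two 320-d features and take the best threshold (a rough sketch; in a proper protocol the threshold is chosen on held-out folds):

```python
import numpy as np

def verification_accuracy(feat1, feat2, is_same):
    """feat1, feat2: (N, 320) features of the two images in each test pair;
    is_same: (N,) array of 0/1 ground truth. Returns the best accuracy over all thresholds."""
    f1 = feat1 / np.linalg.norm(feat1, axis=1, keepdims=True)
    f2 = feat2 / np.linalg.norm(feat2, axis=1, keepdims=True)
    scores = np.sum(f1 * f2, axis=1)              # cosine similarity per pair
    truth = np.asarray(is_same) == 1
    best = 0.0
    for t in np.unique(scores):                   # try every score as a threshold
        best = max(best, np.mean((scores >= t) == truth))
    return best
```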

denghuiru avatar Jan 05 '16 01:01 denghuiru

@happynear I found a problem with your Caffe version. I am using your Windows version (caffe-windows repository), but it seems you don't have the latest version of the contrastive loss; for example, this bug fix is not in your version: https://github.com/nickcarlevaris/caffe/commit/7e2fceb1e91cfe48eddb3569e29aaef4b9ca1a2a

Maybe that is the reason for some of the problems getting the network to converge.

dfdf avatar Jan 11 '16 14:01 dfdf