
Some questions about these pre-trained models

Open Liz66666 opened this issue 6 years ago • 9 comments

Hi, I have downloaded these three models: DAN.npz, DAN-Menpo.npz, DAN-menpo-tracking.npz, but I don't know the difference between them. I have also downloaded the Menpo training dataset, but it needs a password to unzip. Can you share the password or tell me how to get it?

Liz66666 avatar Jul 05 '18 07:07 Liz66666

Hi, for the password please contact the owners of the Menpo dataset: https://ibug.doc.ic.ac.uk/ As for the different models, please take a look at the readme.txt file that is placed in the same directory as the model files. For convenience, I am pasting its content below:

The DAN and DAN-Menpo models are the ones used in the following article: Deep Alignment Network: A convolutional neural network for robust face alignment, CVPRW 2017

The DAN-Menpo-tracking model is a single stage model with an additional layer that outputs the confidence of whether the tracking is correct. This allows for detecting when loss of tracking occurs. This model is used in the following article: HoloFace: Augmenting Human-to-Human Interactions on HoloLens, WACV 2018

Please note that all of the models are trained on the 300-W and Menpo datasets, whose licenses exclude commercial use. You should contact [email protected] to find out whether it's OK for you to use the model files in a commercial product.

Thanks

Marek

MarekKowalski avatar Jul 05 '18 14:07 MarekKowalski

Thanks for your reply!

Liz66666 avatar Jul 06 '18 02:07 Liz66666

Hi friends, I cannot train this DAN with Theano using the GPU on Ubuntu 18.04, so could you kindly point me to a pre-trained model trained with only stage 1? (That means it was trained as the feed-forward network + S0, right?) Thanks!

Onotoko avatar Jul 26 '18 00:07 Onotoko

Hi,

I'm not sure I understand what you are asking for. If you want to use only the first stage of the pretrained models, you can initialize the model with nStages set to 1.

Thanks,

Marek

MarekKowalski avatar Jul 30 '18 13:07 MarekKowalski

Hi Marek, thank you very much, but when I trained the model with Keras I did not reach the same performance as you :(

Onotoko avatar Jul 31 '18 02:07 Onotoko

Hi, one of the things that might help is early stopping of the first stage, i.e. do not train the first stage until it overfits, but stop training when the validation error stops improving.
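The early-stopping criterion described above can be sketched in plain Python. This is a generic patience-based check, not code from this repository; the patience and min_delta values are assumptions to be tuned:

```python
def should_stop(val_errors, patience=10, min_delta=1e-4):
    """Return True when the validation error has not improved
    by at least min_delta within the last `patience` epochs."""
    if len(val_errors) <= patience:
        return False
    best_before = min(val_errors[:-patience])   # best error seen before the window
    recent_best = min(val_errors[-patience:])   # best error within the window
    return recent_best > best_before - min_delta

# Example: the validation error plateaus after a few epochs
errors = [1.0, 0.8, 0.6, 0.5, 0.45, 0.44, 0.44, 0.44, 0.44]
print(should_stop(errors, patience=3))  # plateau detected -> True
```

After each epoch you would append the stage-1 validation error to the list and break out of the training loop once `should_stop` returns True.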

MarekKowalski avatar Aug 02 '18 08:08 MarekKowalski

Hi Marek, I did not fully understand your comment, but when I used early stopping I got a large error (i.e. I stop training when the loss on the validation dataset stops improving). I set up the first stage like this:

  • the feed-forward neural network from your document
  • its output added to S0 (the initial landmark estimate)
  • MSE as the loss function

Do you have any suggestions for me? Thanks!
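The stage-1 output step described above (network output added to S0) can be sketched in NumPy. The network itself is stubbed out as a single linear layer here, and the 68-point shape size is an assumption:

```python
import numpy as np

N_LANDMARKS = 68
S0 = np.zeros((N_LANDMARKS, 2))  # mean shape (placeholder values)

def stage1_output(features, weights):
    """Final layer of stage 1: a linear layer predicting a 2D offset
    per landmark, which is added to the initial shape S0."""
    delta = features @ weights                 # (n_features,) @ (n_features, 136)
    return S0 + delta.reshape(N_LANDMARKS, 2)  # predicted landmark positions

rng = np.random.default_rng(0)
features = rng.standard_normal(256)
weights = rng.standard_normal((256, 136))
landmarks = stage1_output(features, weights)
print(landmarks.shape)  # (68, 2)
```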

Onotoko avatar Aug 02 '18 09:08 Onotoko

Hi,

Instead of MSE you should use the error described in the paper; this actually makes quite a lot of difference!
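The paper's error normalizes the mean point-to-point landmark distance by an inter-ocular measure, so faces of different sizes contribute comparably. A minimal NumPy sketch is below; using the outer eye corners of the 68-point markup (indices 36 and 45) for the normalization is an assumption here, so check the paper for the exact definition:

```python
import numpy as np

def normalized_landmark_error(pred, gt, left_eye=36, right_eye=45):
    """Mean point-to-point distance between predicted and ground-truth
    landmarks, normalized by the inter-ocular distance (here: the
    distance between the outer eye corners of the 68-point markup)."""
    point_dists = np.linalg.norm(pred - gt, axis=1)             # per-landmark distance
    inter_ocular = np.linalg.norm(gt[left_eye] - gt[right_eye]) # face-size normalizer
    return point_dists.mean() / inter_ocular
```

Unlike raw MSE, this error is scale-invariant: the same pixel offset counts for less on a large face than on a small one.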

Let me know if that improves your error as expected.

Best regards,

Marek

MarekKowalski avatar Aug 03 '18 15:08 MarekKowalski

Hi, when I run ImageDemo.py I always get errors, e.g.: ValueError: mismatch: parameter has shape (256, 2) but value to set has shape (256, 3136)

ahpu2014 avatar May 16 '20 06:05 ahpu2014