* pyhowfar
PyTorch version of face-alignment-training (ICCV 2017)
PyTorch version of 'How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)'.
For the official Torch7 version, please refer to [[https://github.com/1adrianb/face-alignment-training][face-alignment-training]].
This is a reimplementation of the training code for 2D-FAN and 3D-FAN described in the "How far" paper. Please visit the [[https://www.adrianbulat.com][author's webpage]] or [[https://arxiv.org/abs/1703.07332][arXiv]] for technical details.
Thanks to bearpaw for the excellent work on human pose estimation, [[https://github.com/bearpaw/pytorch-pose][pytorch-pose]]. This project reuses a number of helper functions from pytorch-pose.
Pretrained models will be available soon.
** Requirements
- Install the latest [[http://pytorch.org][PyTorch]]. Version 0.2.1 is fully supported; older versions have not been tested.
*** Packages
- [[https://www.scipy.org/][scipy]]
- [[https://pytorch.org][torchvision]]
- [[https://pypi.python.org/pypi/progress][progress]] (optional) for better progress visualization; an install sketch for these packages follows below.
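The packages above can be installed roughly as follows. This is a minimal sketch assuming a pip-based environment; the exact PyTorch install command depends on your platform and CUDA version, so use the selector on [[http://pytorch.org][pytorch.org]].
#+BEGIN_SRC bash
# Sketch only: install PyTorch first, following the official instructions for
# your platform/CUDA version, then the helper packages listed above.
pip install scipy torchvision
pip install progress   # optional, nicer progress bars during training
#+END_SRC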
** Setup
- Clone the GitHub repository and install all the dependencies mentioned above.
#+BEGIN_SRC bash
git clone https://github.com/hzh8311/pyhowfar
cd pyhowfar
#+END_SRC
- Download the 300W-LP dataset from the author's webpage. In order to train on your own data, the dataloader.lua file needs to be adapted.
- Download the 300W-LP annotations converted to t7 format by the paper's author from [[https://www.adrianbulat.com/downloads/FaceAlignment/landmarks.zip][here]], extract it, and move the landmarks folder to the root of the 300W-LP dataset.
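For orientation, the layout below shows roughly what the dataset root is expected to look like after both downloads. The subset directory names are assumptions about the 300W-LP release, not something read from this repository's data loader, so adjust the paths to your local copy.
#+BEGIN_SRC bash
# Hypothetical layout of the 300W-LP root after moving the extracted annotations:
#   300W_LP/
#   ├── AFW/ HELEN/ IBUG/ LFPW/ ...   # original 300W-LP images (assumed subset names)
#   └── landmarks/                    # converted .t7 annotations from landmarks.zip
unzip landmarks.zip
mv landmarks /path/to/300W_LP/
#+END_SRC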
** Usage
In order to run the demo, please download the required models available below and the associated data.
#+BEGIN_SRC bash
python main.py
#+END_SRC
In order to see all the available options please run:
#+BEGIN_SRC bash
python main.py --help
#+END_SRC
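Purely as an illustration, a run pinned to a single GPU with the output logged could look like the sketch below. CUDA_VISIBLE_DEVICES is a standard CUDA environment variable rather than an option of main.py, and any training flags you need should be taken from the --help output above.
#+BEGIN_SRC bash
# Illustrative only: select GPU 0 and keep a copy of the training log.
CUDA_VISIBLE_DEVICES=0 python main.py 2>&1 | tee train.log
#+END_SRC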
** What's different?
- Python friendly; no need for .t7 format annotations.
- Added the 300W-LP test set for validation.
- Followed exactly the same training procedure described in the paper (except the binary network part).
- Added model evaluation in terms of mean error and [email protected].
- TODO: add evaluation on test sets (300W, 300VW, AFLW2000-3D etc.).
** Citation
#+BEGIN_SRC bibtex
@inproceedings{bulat2017far,
  title={How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)},
  author={Bulat, Adrian and Tzimiropoulos, Georgios},
  booktitle={International Conference on Computer Vision},
  year={2017}
}
#+END_SRC