

Human Pose Estimation with TensorFlow

Changelog (2018-03-19)

While working on the AI Challenger human skeletal keypoint challenge, this contributor trained a model on that data himself and open-sourced it. The competition, however, annotates only 14 keypoints (see the challenge data description), so when generating the annotations (ai2coco_art_neckhead_json.py) he padded them out to 17 points to match COCO. The data can then be trained with the COCO framework directly (multi-person mode) while sharing the pairwise statistics. On the competition data he trained for 600k iterations, reaching a final score of 0.36; by comparison, multi-person keypoint localization on the original COCO dataset requires 1.8M iterations. The face keypoints missing from AI Challenger are simply zero-filled:

# AI Challenger has no face annotations, so the corresponding COCO
# slots are zero-filled. Each keypoint occupies three values (x, y, v).
# right_eye (COCO keypoint 2)
cocokey[6:9] = [0, 0, 0]
# left_ear (COCO keypoint 3)
cocokey[9:12] = [0, 0, 0]
# right_ear (COCO keypoint 4)
cocokey[12:15] = [0, 0, 0]
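
For context, here is a minimal sketch of the full 14-to-17-point conversion. The keypoint orders, the mapping table, and the placement of head-top and neck in the nose and left-eye slots are assumptions for illustration; the authoritative mapping lives in ai2coco_art_neckhead_json.py.

import numpy as np

# Assumed AI Challenger order (14 points, x/y/v triplets):
# 0 r_shoulder, 1 r_elbow, 2 r_wrist, 3 l_shoulder, 4 l_elbow, 5 l_wrist,
# 6 r_hip, 7 r_knee, 8 r_ankle, 9 l_hip, 10 l_knee, 11 l_ankle,
# 12 head_top, 13 neck
# COCO order: 0 nose, 1 l_eye, 2 r_eye, 3 l_ear, 4 r_ear, 5 l_shoulder,
# 6 r_shoulder, 7 l_elbow, 8 r_elbow, 9 l_wrist, 10 r_wrist, 11 l_hip,
# 12 r_hip, 13 l_knee, 14 r_knee, 15 l_ankle, 16 r_ankle
AI2COCO = {0: 6, 1: 8, 2: 10, 3: 5, 4: 7, 5: 9,
           6: 12, 7: 14, 8: 16, 9: 11, 10: 13, 11: 15,
           12: 0,   # head_top -> nose slot (assumption)
           13: 1}   # neck -> left_eye slot (assumption)

def ai_to_coco_keypoints(aikey):
    """Pad a 14-point AI Challenger annotation to the 17-point COCO layout."""
    cocokey = np.zeros(17 * 3, dtype=np.asarray(aikey).dtype)
    for ai_idx, coco_idx in AI2COCO.items():
        cocokey[coco_idx * 3:(coco_idx + 1) * 3] = aikey[ai_idx * 3:(ai_idx + 1) * 3]
    # right_eye, left_ear, right_ear (COCO keypoints 2-4) stay zero, as above.
    return cocokey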

Here you can find the implementation of the Human Body Pose Estimation algorithm, presented in the ArtTrack and DeeperCut papers:

Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka and Bernt Schiele. DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model. In European Conference on Computer Vision (ECCV), 2016.

Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres and Bernt Schiele. ArtTrack: Articulated Multi-person Tracking in the Wild. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

For more information visit http://pose.mpi-inf.mpg.de

Python 3 is required to run this code. First, install TensorFlow as described in the official documentation. We recommend using virtualenv.

You will also need to install the following Python packages:

$ pip3 install scipy scikit-image matplotlib pyyaml easydict cython munkres

When running the training or prediction scripts, make sure to set the environment variable TF_CUDNN_USE_AUTOTUNE to 0 (see this ticket for an explanation).

If your machine has multiple GPUs, you can select which GPU to run on by setting the CUDA_VISIBLE_DEVICES environment variable, e.g. CUDA_VISIBLE_DEVICES=0.
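
If you prefer to configure this from Python instead of the shell, the same variables can be set via os.environ before TensorFlow is imported (a minimal sketch; both variable names come from the text above):

import os

# Must be set before TensorFlow initializes cuDNN.
os.environ["TF_CUDNN_USE_AUTOTUNE"] = "0"
# Make only the first GPU visible to TensorFlow.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf  # import only after the environment is configured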

Training models

  • COCO (multi-person) / MPII (single-person) dataset training tutorial: README.md
  • Training on your own dataset: SelfTraining.md

Demo code

Single-Person (if there is only one person in the image)

# Download pre-trained model files
$ cd models/mpii
$ ./download_models.sh
$ cd -

# Run demo of single person pose estimation
$ TF_CUDNN_USE_AUTOTUNE=0 python3 demo/singleperson.py

Multiple People

# Compile dependencies
$ ./compile.sh

# Download pre-trained model files
$ cd models/coco
$ ./download_models.sh
$ cd -

# Run demo of multi person pose estimation
$ TF_CUDNN_USE_AUTOTUNE=0 python3 demo/demo_multiperson.py

Citation

Please cite ArtTrack and DeeperCut in your publications if they help your research:

@inproceedings{insafutdinov2017cvpr,
    title = {ArtTrack: Articulated Multi-person Tracking in the Wild},
    booktitle = {CVPR'17},
    url = {http://arxiv.org/abs/1612.01465},
    author = {Eldar Insafutdinov and Mykhaylo Andriluka and Leonid Pishchulin and Siyu Tang and Evgeny Levinkov and Bjoern Andres and Bernt Schiele}
}

@inproceedings{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele}
}