ORPose-Depth
Human Pose Estimation on Privacy-Preserving Low-Resolution Depth Images (MICCAI-2019)
Vinkle Srivastav, Afshin Gangi, Nicolas Padoy
This repository contains the inference demo and evaluation scripts.
Getting Started
Installation
You need Anaconda3 installed for the setup. We developed the code on Ubuntu 16.04 with Python 3.7, PyTorch 1.5.1, and CUDA 10.1, using an NVIDIA GeForce GTX 1080 Ti GPU.
> sudo apt-get install ffmpeg
> ORPose_Depth=/path/to/ORPose_Depth/repository
> git clone https://github.com/CAMMA-public/ORPose-Depth.git $ORPose_Depth
> cd $ORPose_Depth
> conda create -n orposedepth_env python=3.7
> conda activate orposedepth_env
# install dependencies
# install the latest version of PyTorch, or choose one matching your CUDA environment (needs PyTorch > 1.0)
(orposedepth_env)> conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
(orposedepth_env)> conda install -c conda-forge scipy tqdm yacs pycocotools opencv
(orposedepth_env)> conda install jupyterlab
(orposedepth_env)> cd lib && make && cd ..
# download the low resolution images and models
(orposedepth_env)> wget https://s3.unistra.fr/camma_public/github/DepthPose/models.zip
(orposedepth_env)> wget https://s3.unistra.fr/camma_public/github/DepthPose/data.zip
(orposedepth_env)> unzip models.zip
(orposedepth_env)> unzip data.zip
(orposedepth_env)> rm models.zip data.zip
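Optionally, you can sanity-check the environment before moving on. The short script below is not part of the repository; it only verifies that the main dependencies import and that PyTorch sees the GPU (CUDA being unavailable is fine if you plan to use the --use-cpu flag described below).

```python
# quick environment check (optional; not part of the repository)
import torch
import torchvision
import cv2
from yacs.config import CfgNode          # config handling used by the repo
from pycocotools.coco import COCO        # COCO-style annotation utilities

print("PyTorch:", torch.__version__)                 # should be > 1.0
print("torchvision:", torchvision.__version__)
print("OpenCV:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```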
Evaluation on the MVOR dataset
We provide the following models for the evaluation and demo, named after their input depth-image resolutions: DepthPose_80x60 and DepthPose_64x48.
(orposedepth_env)> cd $ORPose_Depth
# add the --use-cpu flag to run the evaluation on the CPU
# run the evaluation for the DepthPose_64x48 model
(orposedepth_env)> python tools/eval_mvor.py --config_file experiments/mvor/DepthPose_64x48.yaml
# run the evaluation for the DepthPose_80x60 model
(orposedepth_env)> python tools/eval_mvor.py --config_file experiments/mvor/DepthPose_80x60.yaml
# or run the script
(orposedepth_env)> cd run && bash eval_depthpose_mvor.sh
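For example, to force the DepthPose_64x48 evaluation onto the CPU with the flag mentioned above:

(orposedepth_env)> python tools/eval_mvor.py --config_file experiments/mvor/DepthPose_64x48.yaml --use-cpu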
You should see the following results after the evaluation:
| Model | Head | Shoulder | Hip | Elbow | Wrist | Average |
|---|---|---|---|---|---|---|
| DepthPose_80x60 | 84.3 | 83.8 | 55.3 | 69.9 | 43.3 | 67.3 |
| DepthPose_64x48 | 84.1 | 83.4 | 54.3 | 69.0 | 41.4 | 66.5 |
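The numbers above are per-joint keypoint accuracies in percent. As a rough illustration of how such accuracies are computed, here is a minimal PCK-style sketch; the threshold convention (`alpha`) and the reference distance are assumptions for illustration only, and the repository's actual evaluation code is borrowed from MVOR (see References).

```python
import numpy as np

def pck(pred, gt, ref_dist, alpha=0.5):
    """PCK-style accuracy: a prediction counts as correct when its distance
    to the ground-truth keypoint is below alpha * ref_dist (a per-person
    reference length, e.g. the head segment).

    pred, gt : (N, 2) arrays of (x, y) keypoint coordinates
    ref_dist : (N,) per-instance reference distances in pixels
    """
    dist = np.linalg.norm(pred - gt, axis=1)
    return 100.0 * np.mean(dist < alpha * ref_dist)

# toy usage with synthetic keypoints
rng = np.random.default_rng(0)
gt = rng.uniform(0, 64, size=(100, 2))
pred = gt + rng.normal(0, 2.0, size=(100, 2))
print(f"PCK: {pck(pred, gt, ref_dist=np.full(100, 10.0)):.1f}%")
```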
Demo on a local computer
# open the 'orpose_depth_demo.ipynb' notebook in JupyterLab
(orposedepth_env)> jupyter lab
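If you prefer a script over the notebook, the sketch below shows the general shape of such an inference pipeline. It is an illustration only: the model constructor (`build_model`) and checkpoint loading are hypothetical placeholders, not the repository's actual API; only the config file path is taken from the evaluation section above. Refer to `orpose_depth_demo.ipynb` for the real calls.

```python
import numpy as np
import torch
from yacs.config import CfgNode as CN

# The config file path is real (see the evaluation section); everything
# marked "hypothetical" below is an illustrative placeholder.
cfg = CN(new_allowed=True)
cfg.merge_from_file("experiments/mvor/DepthPose_64x48.yaml")

# stand-in for a real low-resolution depth frame (48x64 pixels)
depth = np.random.randint(500, 4000, size=(48, 64)).astype(np.float32)
inp = torch.from_numpy(depth)[None, None]  # shape (1, 1, 48, 64)

# model = build_model(cfg)                   # hypothetical constructor
# model.load_state_dict(torch.load(...))     # checkpoint from models/
# model.eval()
# with torch.no_grad():
#     heatmaps = model(inp)                  # per-joint heatmaps
```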
Demo on Google Colab
If you do not have a suitable environment to run the code locally, you can also run the evaluation and demo on Google Colab using the notebook we have prepared.
Citation
@inproceedings{srivastav2019human,
title={Human Pose Estimation on Privacy-Preserving Low-Resolution Depth Images},
author={Srivastav, Vinkle and Gangi, Afshin and Padoy, Nicolas},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={583--591},
year={2019},
organization={Springer}
}
@inproceedings{cao2017realtime,
title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2017}
}
License
The code, models, and datasets are available for non-commercial scientific research purposes under the CC BY-NC-SA 4.0 license. By downloading and using this code, you agree to the terms of the LICENSE. Third-party code is subject to its respective licenses.
References
- Bipartite graph matching code for keypoint-to-person identification is borrowed from PyTorch_RTPose.
- Evaluation code is from MVOR.