ROMP
Monocular, One-stage, Regression of Multiple 3D People and their 3D positions & trajectories in camera & global coordinates. ROMP [ICCV 2021], BEV [CVPR 2022], TRACE [CVPR 2023]
Monocular, One-stage, Regression of Multiple 3D People
| ROMP | BEV |
|---|---|
| ROMP is a one-stage method for monocular multi-person 3D mesh recovery in real time. | BEV further explores multi-person depth relationships and supports all age groups. |
| [Paper] [Video] | [Project Page] [Paper] [Video] [RH Dataset] |
We provide a cross-platform API (installed via pip) to run ROMP & BEV on Linux / Windows / Mac.
Table of contents
- Table of contents
- News
- Getting started
- Installation
- Try on Google Colab
- How to use it
- Train
- Evaluation
- Docker usage
- Bugs report
- Citation
- Acknowledgement
News
2022/06/21: Training & evaluation code of BEV is released. Please update the model_data.
2022/05/16: simple-romp v1.0 is released, supporting tracking, calling from Python, exporting .bvh, etc.
2022/04/14: Inference code of BEV has been released in simple-romp v0.1.0.
2022/04/10: Added ONNX support for faster inference on CPU/GPU.
Old logs
Getting started
Please use simple-romp for inference; the rest of the code is for training only.
Installation
```bash
pip install --upgrade setuptools numpy cython
pip install --upgrade simple-romp
```
For more details, please refer to install.md.
Try on Google Colab
The Google Colab demo lets you run the project in the cloud, free of charge.
How to use it
Please refer to this guidance for inference & export (fbx/glb/bvh).
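As a quick illustration, the `romp` and `bev` commands installed by simple-romp can be invoked roughly as below. This is a sketch with placeholder paths; the exact flag set may differ across versions, so verify against the guidance doc above.

```shell
# Run ROMP on a single image (paths are placeholders).
romp --mode=image --calc_smpl --render_mesh -i=/path/to/image.jpg -o=/path/to/results.jpg

# Run ROMP on a video, saving per-frame results.
romp --mode=video --calc_smpl --render_mesh -i=/path/to/video.mp4 -o=/path/to/output/folder/

# BEV is exposed as its own command with the same style of flags.
bev -i /path/to/image.jpg -o /path/to/results.jpg
```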
Train
For training, please refer to installation.md for the full installation, prepare the training datasets following dataset.md, and then see train.md for training.
Evaluation
Please refer to romp_evaluation.md and bev_evaluation.md for evaluation on benchmarks.
Docker usage
Please refer to docker.md.
Bug reports
Feel free to open an issue to report bugs.
Citation
```bibtex
@InProceedings{BEV,
  author = {Sun, Yu and Liu, Wu and Bao, Qian and Fu, Yili and Mei, Tao and Black, Michael J.},
  title = {Putting People in their Place: Monocular Regression of 3D People in Depth},
  booktitle = {CVPR},
  year = {2022}}
@InProceedings{ROMP,
  author = {Sun, Yu and Bao, Qian and Liu, Wu and Fu, Yili and Black, Michael J. and Mei, Tao},
  title = {Monocular, One-stage, Regression of Multiple 3D People},
  booktitle = {ICCV},
  year = {2021}}
```
Acknowledgement
We thank all contributors for their help!
This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0103800.
Disclosure: MJB has received research funds from Adobe, Intel, Nvidia, Facebook, and Amazon and has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While he was part-time at Amazon during this project, his research was performed solely at Max Planck.