mmhuman3d
How to run inference with the HybrIK model in the demo?
I tried running the demo with the following command:

python demo/estimate_smpl.py \
    configs/hybrik/resnet34_hybrik_mixed.py \
    models/resnet34_hybrik_mixed-a61b3c9c_20220211.pth \
    --single_person_demo \
    --det_config demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
    --det_checkpoint https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    --input_path tmp.mp4 \
    --show_path vis_results/single_person_demo.mp4 \
    --output demo_result \
    --smooth_type savgol \
    --speed_up_type deciwatch \
    --draw_bbox

Hi @Gzzgz, we currently do not support HybrIK in the demo. @ttxskk may provide more details.
OK, thanks.
Hi @Gzzgz, for HybrIK inference, the official HybrIK repository has released a demo version that explicitly predicts the camera. We will add this version to MMHuman3D once the authors release all of the code and data.
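For context on what "explicitly predicts the camera" means here: HybrIK-style demos regress a weak-perspective camera alongside the body pose, so the predicted mesh or joints can be projected back onto the input frame. Below is a minimal sketch of that projection step only; the tensor shapes and the (scale, tx, ty) camera convention are illustrative assumptions, not the actual HybrIK or MMHuman3D API.

```python
import torch


def weak_perspective_project(joints_3d, pred_cam, img_size):
    """Project 3D joints to pixels with an explicitly predicted weak-perspective camera.

    Args:
        joints_3d: (N, J, 3) root-relative 3D joints (assumed layout).
        pred_cam:  (N, 3) camera as (scale, tx, ty) -- an assumed convention.
        img_size:  output image resolution in pixels (assumed square crop).

    Returns:
        (N, J, 2) joint locations in pixel coordinates.
    """
    s = pred_cam[:, 0:1].unsqueeze(1)   # (N, 1, 1) per-sample scale
    t = pred_cam[:, 1:3].unsqueeze(1)   # (N, 1, 2) in-plane translation
    # Weak perspective: drop depth, then scale and translate in the image plane.
    joints_2d = s * (joints_3d[:, :, :2] + t)     # normalized coords, roughly [-1, 1]
    # Map from normalized coordinates to pixel coordinates of the crop.
    return (joints_2d + 1.0) * 0.5 * img_size


# Example: two fake detections with 24 joints each.
joints_3d = torch.randn(2, 24, 3)
pred_cam = torch.tensor([[0.9, 0.0, 0.1], [1.1, -0.05, 0.0]])
joints_px = weak_perspective_project(joints_3d, pred_cam, img_size=512)
print(joints_px.shape)  # torch.Size([2, 24, 2])
```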