ROMP
Monocular, One-stage, Regression of Multiple 3D People and their 3D positions & trajectories in camera & global coordinates. ROMP [ICCV'21], BEV [CVPR'22], TRACE [CVPR'23]
Hello, author! I'm learning ROMP. I reconstructed a 3D human model from an image. Now I have found a two-dimensional coordinate point on the person in the image, and I want to know how...
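A common starting point for this kind of 2D-to-mesh query (a minimal sketch, not ROMP's API: `nearest_vertex_to_pixel`, the projected-vertex array, and the image size below are all assumptions) is to project every mesh vertex into the image and pick the vertex whose projection lies closest to the query pixel:

```python
import numpy as np

def nearest_vertex_to_pixel(verts_3d, proj_2d, query_xy):
    """Return the index of the mesh vertex whose 2D projection is
    closest to the picked pixel.

    verts_3d : (N, 3) mesh vertices in 3D (kept for context/indexing)
    proj_2d  : (N, 2) pixel coordinates of each vertex after projection
    query_xy : (2,)   the 2D point picked on the image
    """
    dists = np.linalg.norm(proj_2d - np.asarray(query_xy)[None, :], axis=1)
    return int(np.argmin(dists))

# Toy usage with made-up projections (SMPL meshes have 6890 vertices)
verts = np.random.rand(6890, 3)
proj = np.random.rand(6890, 2) * 512   # hypothetical 512x512 image
idx = nearest_vertex_to_pixel(verts, proj, (256.0, 300.0))
print("closest vertex:", idx, verts[idx])
```

Note this gives the nearest *projected* vertex, not necessarily the nearest visible surface point; an occlusion-aware answer would additionally ray-cast against the mesh faces.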
Hello! First, I greatly appreciate all the work you have put into this project. I have two questions for you: 1. I want to import an .fbx into Unity but...
Training time
How long does it take to train on the full dataset with 4 P40 GPUs?
Hi, I have difficulty training on the 6 datasets (mpiinf, coco, mpii, lsp, muco, crowdpose). The training code runs successfully for a while (no more than a few epochs)...
Hi author, was the provided ROMP_HRNet32_V1.pkl obtained by training with v1.yml on top of pretrain_hrnet.pkl?
Hi, I read your code for Panoptic evaluation and want to make sure that the mesh parameter map information is fetched directly from the GT and not from the center...
Questions about the dataset
The H3.6M dataset you provided uses 311,124 images for training and 106,648 for testing, with 417,772 images in total in the images folder. Q1: Fig. 2(a) of the Human3.6M [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6682899&tag=1) shows 3,640,788 images across train, val, and test. How were your 417,772 images selected and split? Q2: According to the Human3.6M [official overview](http://vision.imar.ro/human3.6m/description.php), only clothed meshes are available, not unclothed SMPL meshes. How was your mesh data generated?
[myv2_train_pretrained_hrnet_h36m,mpiinf,coco,mpii,lsp,muco,crowdpose_g0,1.log](https://github.com/Arthur151/ROMP/files/8147875/myv2_train_pretrained_hrnet_h36m.mpiinf.coco.mpii.lsp.muco.crowdpose_g0.1.log) Q1: Training was interrupted halfway through due to OOM or other reasons. How do I set the parameters so that training can resume? Q2: Even if all 120 epochs finish successfully, if the loss has not yet converged, how do I set the parameters to continue training for more epochs?
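Both questions above come down to checkpointing. The generic pattern (a plain-Python sketch, not ROMP's actual training code; `save_checkpoint`, `load_checkpoint`, and the file path are all made up for illustration) is to persist the last finished epoch alongside the model state, then start the epoch loop from the saved epoch + 1 and simply raise the epoch limit to train longer:

```python
import os
import pickle

CKPT = "checkpoint.pkl"  # hypothetical checkpoint path

def save_checkpoint(epoch, model_state, path=CKPT):
    # Persist the last completed epoch together with the model state.
    with open(path, "wb") as f:
        pickle.dump({"epoch": epoch, "model_state": model_state}, f)

def load_checkpoint(path=CKPT):
    # Resume from the epoch after the last completed one, or from scratch.
    if not os.path.exists(path):
        return 0, None
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["epoch"] + 1, ckpt["model_state"]

start_epoch, state = load_checkpoint()
total_epochs = 3  # in real training you would raise this past 120 to continue
for epoch in range(start_epoch, total_epochs):
    state = {"weights": epoch}  # stand-in for one epoch of actual training
    save_checkpoint(epoch, state)
```

In a real PyTorch setup the dict would also hold the optimizer and LR-scheduler state dicts, so that resuming does not reset the learning-rate schedule.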
I haven't read the source code yet. I ran the demo and found that head pose estimation is not supported, which is confirmed by the results in the experimental section...
https://github.com/Arthur151/ROMP/blob/5fb833d15d11f94b2f62d9c65619eedc85c40d3f/romp/lib/dataset/preprocess/h36m_extract_frames.py#L19 https://github.com/Arthur151/ROMP/blob/5fb833d15d11f94b2f62d9c65619eedc85c40d3f/romp/lib/dataset/preprocess/h36m_extract_frames.py#L33