tommyshelby4
Where could I possibly find the camera parameters (intri.yml and extri.yml) for the monocular video demos demonstrated here https://chingswy.github.io/easymocap-public-doc/quickstart/quickstart.html? Thanks in advance.
Hello everybody and congratulations on your amazing work. Could somebody confirm that **gen_wts_yoloV5.py** is replaced by **export_yolo_V5.py**?
Could somebody explicitly state what the coordinate system of SMPL is? What are its forward, up, and right vectors?
When running the monocular demo, the output matrix containing the SMPL pose parameters has a dimension of 75, while in the multi-camera setup (**mv1p**) the output vector's dimension is 78....
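For reference, SMPL's body pose itself is 72 parameters (24 joints × 3 axis-angle values), so the extra dimensions are typically the global root rotation and/or translation. The layout sketched below (3 for `Rh`, 3 for `Th`, then 72 pose values, totalling 78) is an assumption about the output format, not confirmed from the code; a 75-dim vector could simply omit one of the 3-vectors.

```python
import numpy as np

def split_smpl_params(vec):
    """Split a flat SMPL parameter vector into named parts.

    Assumed layout (hypothetical, verify against the actual output):
      78 = 3 (global rotation Rh, axis-angle)
         + 3 (global translation Th)
         + 72 (24 joints x 3 axis-angle body pose params)
    """
    vec = np.asarray(vec)
    assert vec.shape[-1] == 78, "assumed 78-dim layout"
    Rh = vec[..., 0:3]        # global (root) rotation, axis-angle
    Th = vec[..., 3:6]        # global translation
    poses = vec[..., 6:78]    # body pose, 24 joints x 3
    return Rh, Th, poses.reshape(*vec.shape[:-1], 24, 3)

Rh, Th, poses = split_smpl_params(np.zeros(78))
print(Rh.shape, Th.shape, poses.shape)  # (3,) (3,) (24, 3)
```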
Is it possible that during post-processing I somehow get joint angles instead of the 3D keypoints of the SMPL joint positions?
Is there a way I can obtain 3D keypoints instead of 2D keypoints in the monocular demo paradigm? Thanks in advance.
Is there any case in which the 3D joint positions provided by the dataset are local transforms instead of global 3D positions?
If I want to use the model just for inference and not for training, is the dataset generation necessary?
In the denoising mode, the input I get has dimension **(num_frames, 63)**, which seems weird to me, bearing in mind that PoseNDF uses a quaternion representation as input. Shouldn't it be...
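A plausible reading (an assumption, not confirmed from the PoseNDF code): the 63 values are 21 body joints × 3 axis-angle parameters, which would need converting to 21 quaternions per frame before being fed to a network expecting quaternion input. A minimal conversion sketch using SciPy:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def axis_angle_to_quat(poses_aa):
    """Convert (num_frames, 63) axis-angle poses to (num_frames, 21, 4) quaternions.

    The 21-joint count and joint ordering are assumptions about the dataset;
    quaternions are returned in SciPy's xyzw convention.
    """
    n_frames = poses_aa.shape[0]
    aa = poses_aa.reshape(-1, 3)               # (n_frames * 21, 3)
    quat = Rotation.from_rotvec(aa).as_quat()  # (n_frames * 21, 4), xyzw
    return quat.reshape(n_frames, -1, 4)

quats = axis_angle_to_quat(np.zeros((10, 63)))
print(quats.shape)  # (10, 21, 4)
```

Zero axis-angle vectors map to the identity quaternion (0, 0, 0, 1) under this convention.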