8 comments by Jeonghan Lee

Hi, I tried to run the demo code, but I'm hitting exactly the same error... May I ask if you've succeeded in running the demo in the past? Thank...

![Screenshot, 2024-04-11 20-21-51](https://github.com/geopavlakos/multishot/assets/162006743/17560238-86cb-44c4-9b7a-7a9676e160f3) The above is the PHALP output for the first frame of the video I provided as input.

I was able to run the first demo script, but when I try to run the second demo script, main.py, I get the error below. ![Screenshot, 2024-04-14 20-40-40](https://github.com/geopavlakos/multishot/assets/162006743/9c573b45-7fc5-4e21-afdd-7f95373d0f06) If you're having trouble...

I've now run the demo code, and I even succeeded in rendering the mesh-overlaid frames for tracklet = 1. However, I can't get the result I want... I've tried...

```python
else:  # frame has no valid detection: fill placeholder values
    valid = 0
    scale = 1
    center = [0, 0]
    imgname = os.path.join(args.phalp_demo, 'img', track_frame['img_name'][0])
    H, W = track_frame['size'][0]
    pred_pose_aa = np.zeros(72)  # SMPL pose in axis-angle (24 joints x 3)
    betas = np.zeros(10)         # SMPL shape coefficients
    cam_trans = np.zeros(3)      # camera translation
    torso_joints...
```
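For context, the fallback pattern above can be sketched as a self-contained helper; the function name `make_placeholder` and the dict layout are hypothetical, not from the repository:

```python
import numpy as np

def make_placeholder():
    # Hypothetical helper mirroring the snippet above: zero-filled
    # SMPL parameters for a frame with no valid detection.
    return {
        'valid': 0,
        'scale': 1,
        'center': [0, 0],
        'pred_pose_aa': np.zeros(72),  # axis-angle pose, 24 joints x 3
        'betas': np.zeros(10),         # SMPL shape coefficients
        'cam_trans': np.zeros(3),      # camera translation
    }

entry = make_placeholder()
print(entry['valid'], entry['pred_pose_aa'].shape)
```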

> Hi, I tried to run the demo code, but I'm hitting exactly the same error... May I ask if you've succeeded in running the demo in the past?...

> Maybe it is because the Human3.6M SMPL ground truth provided by MoSh is no longer available. Reference: https://github.com/nkolot/ProHMR/tree/master/dataset_preprocessing
>
> > Human3.6M: Unfortunately, due to license limitations, we are...

Hi, 4DHumans uses Pose Transformer V2 for pose prediction, which differs from PHALP. However, I've tried running both the latest PHALP and 4DHumans repositories on the AVA dataset, and I...