
Unnatural pose around the hands?

SizheAn opened this issue 2 years ago • 13 comments

Hi,

I've been trying to reconstruct SMPL meshes from my own dataset. I found the keypoint correspondences and created my own .py file for the dataset, then tested smplify.py (using the smplify3d config) on some of the data. Here are some of the results:

keypoint: movement2 Reconstructed SMPL: https://user-images.githubusercontent.com/87412724/163101313-e36b7bda-258f-4d89-bed9-506f905aacc3.mp4

keypoint: movement3 Reconstructed SMPL: https://user-images.githubusercontent.com/87412724/163101456-e771c9dd-5c52-4b09-a712-d22ef6e0da37.mp4

keypoint: movement4 Reconstructed SMPL: https://user-images.githubusercontent.com/87412724/163101482-29599056-07e4-4ec3-b314-5c8d4df9eaf2.mp4

Overall it works pretty well. However, the poses around the hands do not look natural to me, given that I know the ground-truth pose is not like this. In the first example, the model gives a right-hand pose that is basically impossible in real life, while the left (still) hand looks pretty legit. As another example, in the third video both palms should face the head, not the other way around. It looks like the SMPL model recovers the correct swing but the incorrect twist. Does SMPLify itself use inverse kinematics to avoid unnatural poses? Or do you recommend any other method to solve this issue?
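For context on the swing/twist terminology: any rotation can be factored into a twist about the bone axis and a swing perpendicular to it. A minimal NumPy sketch of the standard quaternion decomposition (this is illustrative, not mmhuman3d code):

```python
import numpy as np

def swing_twist(q, axis):
    """Decompose a unit quaternion q = [w, x, y, z] into (swing, twist),
    where twist is the rotation about `axis` and q = swing * twist."""
    w, v = q[0], q[1:]
    axis = axis / np.linalg.norm(axis)
    # Project the quaternion's vector part onto the twist axis.
    proj = np.dot(v, axis) * axis
    twist = np.array([w, *proj])
    twist /= np.linalg.norm(twist)
    # swing = q * conjugate(twist), via the quaternion product
    # (w1, v1) * (w2, v2) = (w1*w2 - v1.v2, w1*v2 + w2*v1 + v1 x v2).
    tw, tv = twist[0], -twist[1:]
    sw = w * tw - np.dot(v, tv)
    sv = w * tv + tw * v + np.cross(v, tv)
    return np.array([sw, *sv]), twist
```

A keypoint at a joint position only constrains the swing of its parent bone, which is why the twist can come out wrong even when the fit looks good. (The sketch is degenerate when the rotation is a pure 180-degree swing, where the twist norm goes to zero.)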

Also, my keypoint set has wrist, hand, thumb, and middle-finger keypoints for each hand. However, I found that using only the wrists and using all four gives almost the same result. This is the other thing I don't understand: does it mean the model cannot extract any useful information from keypoints other than the wrists? Thanks a lot!
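One possible explanation for the extra keypoints having little effect (a sketch with hypothetical names, not the actual mmhuman3d loss): if the keypoint loss weights each joint by a confidence or per-joint weight, hand joints with small weights barely move the objective.

```python
import numpy as np

def weighted_keypoint_loss(pred, target, conf):
    """Per-joint weighted squared error.
    pred, target: (B, K, 3) joint positions; conf: (B, K) weights."""
    se = ((pred - target) ** 2).sum(axis=-1)  # (B, K) squared error per joint
    return (conf * se).mean()
```

If the finger joints receive near-zero weight in the config, adding them to the input changes the fit almost not at all.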

SizheAn avatar Apr 13 '22 04:04 SizheAn

Hi @SizheAn

It looks like the SMPL model gives the right Swing but the incorrect Twist

This is a limitation of SMPLify, as there is no keypoint to constrain the twist of the hands. Since you have additional hand keypoints, have you tried SMPLify-X instead (by changing the config)? You may also want to adjust the hand_weight in the config file. The configs are in this folder.
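The fitting-stage configs are plain Python dicts, so raising the hand weight is a one-line change. A hypothetical fragment (the key names below are illustrative; check the actual smplifyx config in the repo for the real schema):

```python
# Hypothetical SMPLify-X fitting stage; key names are illustrative only.
stages = [
    dict(
        fit_global_orient=True,
        fit_body_pose=True,
        fit_left_hand_pose=True,
        fit_right_hand_pose=True,
        keypoints3d_weight=1.0,
        hand_weight=2.0,  # increase to pull the hand keypoints harder
    ),
]
```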

caizhongang avatar Apr 13 '22 05:04 caizhongang


Hi Zhongang,

In https://github.com/open-mmlab/mmhuman3d/blob/main/docs/getting_started.md, for SMPL we need to rename the model file. What about SMPL-X? Do I also need to rename it, and if so, what's the correct name?

SizheAn avatar Apr 13 '22 14:04 SizheAn

Another question:

I see that there are three kinds of loss when optimizing the smplify model: keypoints3d_loss, smooth_loss, and pose_reg_loss. But this looks different from the five loss components in the original paper.

Any explanation for this? Or any supplement material recommended to read? Thanks!

SizheAn avatar Apr 13 '22 14:04 SizheAn


In https://github.com/open-mmlab/mmhuman3d/blob/main/docs/getting_started.md, for smpl, we need to change the model name. How about for smplx? Do I also need to rename it? If so, what's the correct name?

The original smplx model folder downloaded from the official site works. No renaming is needed.

SizheAn avatar Apr 13 '22 15:04 SizheAn


Hi Zhongang,

I tried running with the smplify-x config. However, I'm facing an issue:

    File "smplify_customize.py", line 149, in main
        pred = smplify_output['joints'].cpu().numpy()
    KeyError: 'joints'

Looks like the smplify_output returned with the smplifyx config doesn't have a 'joints' key. I verified it further:

This is the smplify_output when the config file is smplify3d.py: [screenshot: the returned dict contains a 'joints' key]

This is the smplify_output when the config file is smplifyx.py: [screenshot: no 'joints' key]

As we can see, there is no 'joints' key. By the way, in https://github.com/open-mmlab/mmhuman3d/blob/6a77986759d50f6bb9e9a45724d8436b216cc5fc/tools/smplify.py#L142, return_joints is True for both tests. Please let me know if I'm doing something wrong. Thanks a lot!

SizheAn avatar Apr 13 '22 22:04 SizheAn


Changed https://github.com/open-mmlab/mmhuman3d/blob/fb5ad0f7706da449f8b4fdc1db8314f44255e745/mmhuman3d/models/registrants/smplifyx.py#L133-L145 to

        # collate results
        ret = {
            'global_orient': global_orient,
            'transl': transl,
            'body_pose': body_pose,
            'betas': betas,
            'left_hand_pose': left_hand_pose,
            'right_hand_pose': right_hand_pose,
            'expression': expression,
            'jaw_pose': jaw_pose,
            'leye_pose': leye_pose,
            'reye_pose': reye_pose
        }

        if return_verts or return_joints or \
                return_full_pose or return_losses:
            eval_ret = self.evaluate(
                betas=betas,
                body_pose=body_pose,
                global_orient=global_orient,
                transl=transl,
                left_hand_pose=left_hand_pose,
                right_hand_pose=right_hand_pose,
                expression=expression,
                jaw_pose=jaw_pose,
                leye_pose=leye_pose,
                reye_pose=reye_pose,
                keypoints2d=keypoints2d,
                keypoints2d_conf=keypoints2d_conf,
                keypoints3d=keypoints3d,
                keypoints3d_conf=keypoints3d_conf,
                return_verts=return_verts,
                return_full_pose=return_full_pose,
                return_joints=return_joints,
                reduction_override='none'  # sample-wise loss
            )

            if return_verts:
                ret['vertices'] = eval_ret['vertices']
            if return_joints:
                ret['joints'] = eval_ret['joints']
            if return_full_pose:
                ret['full_pose'] = eval_ret['full_pose']
            if return_losses:
                for k in eval_ret.keys():
                    if 'loss' in k:
                        ret[k] = eval_ret[k]

        for k, v in ret.items():
            if isinstance(v, torch.Tensor):
                ret[k] = v.detach().clone()

        return ret

to make it fully compatible with smplify.py. It seems correct to me; please take a look and confirm. (The global orient still has some problems, but I will keep you posted once I fix it.)

Even using the example data, the human_data sample also gives bad results when using the smplifyx config. This is the mmhuman3d_smplify_a009.mp4 file I generated: https://user-images.githubusercontent.com/87412724/163298669-f3eebef3-4924-43b3-9b97-7312b266edd8.mp4 Please take a look at this.

SizheAn avatar Apr 13 '22 22:04 SizheAn


Hi @SizheAn, thank you very much for the fix. Would you mind contributing it to MMHuman3D with a pull request?

caizhongang avatar Apr 22 '22 08:04 caizhongang

Even using the example data, the human_data sample also gives bad results when using the smplifyx config.

I think there are two reasons:

  1. The input keypoints are from Kinect (I guess); is the quality of the keypoints good enough?
  2. We also find that tuning SMPLify-X is not trivial. We are currently working on adding more losses in MMHuman3D and will keep you posted on this.

caizhongang avatar Apr 22 '22 08:04 caizhongang

I see that there are three kinds of loss when optimizing the smplify model: keypoints3d_loss, smooth_loss, and pose_reg_loss. But it looks different from the five loss components in the original paper.

For the losses, we have so far implemented the keypoint losses, the smooth loss, and the pose regularization loss. We hope to include the interpenetration loss from SMPLify-X as soon as possible.

caizhongang avatar Apr 22 '22 08:04 caizhongang

As the input keypoints are from Kinect (I guess), is the quality of the keypoints good enough?

No, this video mmhuman3d_smplify_a009.mp4 was not generated from Kinect; it was an example from here (human_data): https://github.com/open-mmlab/mmhuman3d/pull/102. To check whether my fix was correct, I used the example data human_data_tri_a009.npz that @yl-1993 uploaded there, and it shows that the result from the smplifyx config is not promising. Any thoughts? I don't think I should open a pull request before the example looks good.

SizheAn avatar Apr 22 '22 20:04 SizheAn

We are finetuning the parameters for SMPLify; as you can see, smplify3d.py (an improved config for when 3D keypoints are available) is quite different from smplify.py. The performance is sensitive to various hyperparameters (fitting stages, loss weights, etc.).

For SMPL-X fitting, we have yet to conduct extensive hyperparameter finetuning. But to reproduce a body fit similar to that in #102, trying a new smplifyx config that is identical to smplify3d.py for the body part, without any hand losses, may be helpful. If this approach works for the body, we can add an additional stage for the hands.

Another thing is that you may wish to use either 2D keypoints or 3D keypoints, but not both (you can do this by removing the unwanted losses in the config). If 3D keypoints are available, we prefer them over 2D keypoints.
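As a concrete illustration of keeping a single modality, a hypothetical stage entry that retains only the 3D keypoint term (the key names are illustrative, not the exact mmhuman3d config schema):

```python
# Hypothetical fitting stage using 3D keypoints only; key names illustrative.
stage = dict(
    fit_global_orient=True,
    fit_body_pose=True,
    keypoints3d_weight=1.0,
    # no keypoints2d weight: the 2D reprojection loss is effectively disabled
)
```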

caizhongang avatar Apr 23 '22 12:04 caizhongang


Hi Zhongang,

I sent an email to your SenseTime address with some questions. It would be great if you could check it when you have time :) Thanks!

SizheAn avatar May 31 '22 05:05 SizheAn

Hello,

I'm encountering an issue when passing 2D keypoint data to the smplify.py script. I generated the human_data using the OpenPose-25 2D format. When loading the human_data.npz file with human_data = HumanData.fromfile(args.input), I set args.input_type to 'keypoints2d'.

However, when doing this, I hit an unexpected error. Could you please advise on the correct way to feed OpenPose-25 2D keypoints to smplify.py?

Kindly find the error message below. I'm looking forward to your assistance; thank you.

    /mmhuman3d/mmhuman3d/models/losses/mse_loss.py:24: UserWarning: Using a target size (torch.Size([1, 45, 3])) that is different to the input size (torch.Size([1, 45, 2])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
      loss = F.mse_loss(pred, target, reduction='none')
    Traceback (most recent call last):
      File "tools/smplify.py", line 198, in <module>
        main()
      File "tools/smplify.py", line 164, in main
        smplify_output = smplify(**human_data, return_joints=True)
      File "/mmhuman3d/mmhuman3d/models/registrants/smplify.py", line 223, in __call__
        self._optimize_stage(
      File "/mmhuman3d/mmhuman3d/models/registrants/smplify.py", line 377, in _optimize_stage
        loss = optimizer.step(closure)
      File "/usr/local/lib/python3.8/dist-packages/torch/optim/optimizer.py", line 89, in wrapper
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/torch/optim/lbfgs.py", line 311, in step
        orig_loss = closure()
      File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/mmhuman3d/mmhuman3d/models/registrants/smplify.py", line 354, in closure
        loss_dict = self.evaluate(
      File "/mmhuman3d/mmhuman3d/models/registrants/smplify.py", line 461, in evaluate
        loss_dict = self._compute_loss(
      File "/mmhuman3d/mmhuman3d/models/registrants/smplify.py", line 564, in _compute_loss
        keypoint2d_loss = self.keypoints2d_mse_loss(
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mmhuman3d/mmhuman3d/models/losses/mse_loss.py", line 167, in forward
        loss = loss_weight * mse_loss_with_gmof(
      File "/mmhuman3d/mmhuman3d/models/losses/utils.py", line 95, in wrapper
        loss = loss_func(pred, target, **kwargs)
      File "/mmhuman3d/mmhuman3d/models/losses/mse_loss.py", line 24, in mse_loss_with_gmof
        loss = F.mse_loss(pred, target, reduction='none')
      File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2925, in mse_loss
        expanded_input, expanded_target = torch.broadcast_tensors(input, target)
      File "/usr/local/lib/python3.8/dist-packages/torch/functional.py", line 74, in broadcast_tensors
        return _VF.broadcast_tensors(tensors)  # type: ignore
    RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 2
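The UserWarning at the top is the real clue: the target keypoints still carry a third channel (in OpenPose output this is the per-joint confidence), while the predicted joints are (x, y) only, so the shapes (1, 45, 3) vs (1, 45, 2) cannot broadcast. A sketch of splitting the confidence off before fitting (array names are illustrative, not the exact HumanData schema):

```python
import numpy as np

# OpenPose-25 output gives (x, y, confidence) per joint.
openpose = np.random.rand(1, 25, 3)   # (frames, joints, x/y/conf)

keypoints2d = openpose[..., :2]       # (1, 25, 2): coordinates only
keypoints2d_conf = openpose[..., 2]   # (1, 25): per-joint confidence
```

With the coordinates and confidences stored separately, both sides of the 2D keypoint loss have a trailing dimension of 2.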

yqtl avatar Jul 19 '23 12:07 yqtl