
SMPL-H model hits einsum error with hand poses

Open gngdb opened this issue 2 years ago • 5 comments

Trying to pass a hand pose tensor of shape (1, 45), i.e. batch size 1 and 45 values for the 15 hand joint angles listed here. I get the following error:

Traceback (most recent call last):
  File "/home/gngdb/repos/smplx/transfer_model/write_obj.py", line 123, in <module>
    main(model_folder, motion_file, model_type, ext=ext,
  File "/home/gngdb/repos/smplx/transfer_model/write_obj.py", line 57, in main
    output = model(
  File "/home/gngdb/repos/fairmotion/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gngdb/repos/smplx/smplx/body_models.py", line 722, in forward
    left_hand_pose = torch.einsum(
  File "/home/gngdb/repos/fairmotion/.venv/lib/python3.9/site-packages/torch/functional.py", line 325, in einsum
    return einsum(equation, *_operands)
  File "/home/gngdb/repos/fairmotion/.venv/lib/python3.9/site-packages/torch/functional.py", line 327, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [1, 45]->[1, 1, 45] [6, 45]->[1, 45, 6]

The operation that fails is:

left_hand_pose = torch.einsum('bi,ij->bj', [left_hand_pose, self.left_hand_components])

The shapes of the input tensors are:

  • left_hand_pose: torch.Size([1, 45])
  • self.left_hand_components: torch.Size([6, 45])

Obviously dimension i is 45 for the first tensor but 6 for the second. The two tensors do match on their last dimension (45), so maybe that is what is supposed to be reduced over? Or has the body model been loaded incorrectly?
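For reference, here is how I understand the shapes (a minimal sketch, assuming use_pca=True with the default num_pca_comps=6; the tensors below are random placeholders, not real model data). It suggests the forward pass expects PCA coefficients rather than the full 45-dim axis-angle pose:

import torch

# sketch of the projection done in body_models.py, with placeholder tensors
num_pca_comps, full_hand_dim = 6, 45
left_hand_components = torch.randn(num_pca_comps, full_hand_dim)   # (6, 45)

pca_coeffs = torch.randn(1, num_pca_comps)                          # (1, 6)
projected = torch.einsum('bi,ij->bj', [pca_coeffs, left_hand_components])
print(projected.shape)                                              # torch.Size([1, 45])

axis_angle = torch.randn(1, full_hand_dim)                          # (1, 45)
# torch.einsum('bi,ij->bj', [axis_angle, left_hand_components])     # raises the error above,
# because subscript i would have to be 45 on the left and 6 on the right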

gngdb · Jan 31 '22 21:01

If I change the einsum so the dimensions correspond, I hit an error about ten lines further down, because pose_mean is now a different size from the concatenated full_pose:

  File "/home/gngdb/repos/smplx/smplx/body_models.py", line 731, in forward
    full_pose += self.pose_mean
RuntimeError: The size of tensor a (78) must match the size of tensor b (156) at non-singleton dimension 1
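The 78 vs 156 sizes line up with the SMPL-H pose layout if each hand ends up contributing only its 6 PCA coefficients instead of the projected 45 axis-angle values (a rough accounting, assuming 1 global-orient joint, 21 body joints and 2 × 15 hand joints with 3 rotation components each):

# pose_mean covers the full SMPL-H pose: 52 joints * 3 axis-angle components
full_pose_dim = (1 + 21 + 15 + 15) * 3    # 156, the size of pose_mean (tensor b)
# if each hand stays as 6 PCA coefficients, the concatenated pose is shorter
pca_pose_dim = (1 + 21) * 3 + 6 + 6       # 78, the size of full_pose (tensor a)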

gngdb · Jan 31 '22 21:01

The only workaround I can think of is to set use_pca=False. I'm not sure what that option does, though, so I don't know whether disabling it will cause problems later.
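For anyone hitting the same thing, this is my understanding of the two calling conventions (a sketch assuming the standard smplx.create factory; model_folder is a placeholder path to the SMPL-H model files):

import torch
import smplx

model_folder = '/path/to/models'   # placeholder: folder containing the SMPL-H model files

# use_pca=True (the default): hand poses are PCA coefficients of shape (B, num_pca_comps)
model_pca = smplx.create(model_folder, model_type='smplh', use_pca=True, num_pca_comps=6)
output = model_pca(left_hand_pose=torch.zeros(1, 6), right_hand_pose=torch.zeros(1, 6))

# use_pca=False: hand poses are full axis-angle vectors of shape (B, 15 * 3) = (B, 45)
model_aa = smplx.create(model_folder, model_type='smplh', use_pca=False)
output = model_aa(left_hand_pose=torch.zeros(1, 45), right_hand_pose=torch.zeros(1, 45))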

After disabling it, the issue with pose_mean disappears and now it hits an error during lbs:

Traceback (most recent call last):
  File "/home/gngdb/repos/smplx/transfer_model/write_obj.py", line 124, in <module>
    main(model_folder, motion_file, model_type, ext=ext,
  File "/home/gngdb/repos/smplx/transfer_model/write_obj.py", line 58, in main
    output = model(
  File "/home/gngdb/repos/fairmotion/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gngdb/repos/smplx/smplx/body_models.py", line 734, in forward
    vertices, joints = lbs(betas, full_pose, self.v_template,
  File "/home/gngdb/repos/smplx/smplx/lbs.py", line 205, in lbs
    v_shaped = v_template + blend_shapes(betas, shapedirs)
  File "/home/gngdb/repos/smplx/smplx/lbs.py", line 291, in blend_shapes
    blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])
  File "/home/gngdb/repos/fairmotion/.venv/lib/python3.9/site-packages/torch/functional.py", line 325, in einsum
    return einsum(equation, *_operands)
  File "/home/gngdb/repos/fairmotion/.venv/lib/python3.9/site-packages/torch/functional.py", line 327, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): the number of subscripts in the equation (2) does not match the number of dimensions (1) for operand 0 and no ellipsis was given

It appears to be because the model expects num_betas=10, due to the issue described in #109 where num_betas is always set to 10. If I slice `betas[:, :10]` before passing it to the forward pass, it doesn't hit the error.

However, if I comment out the line that sets num_betas to 10 then it also doesn't hit the error. I don't know if that is likely to cause other problems, unfortunately.
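In case it helps, the slicing workaround from above looks roughly like this (a sketch; model_folder is a placeholder path and the betas tensor stands in for shape coefficients loaded from the motion file):

import torch
import smplx

model_folder = '/path/to/models'        # placeholder path to the SMPL-H model files
model = smplx.create(model_folder, model_type='smplh', use_pca=False)

betas = torch.randn(1, 16)              # stand-in for shape coefficients from the motion file
output = model(betas=betas[:, :10],     # keep only the first num_betas (default 10) coefficients
               left_hand_pose=torch.zeros(1, 45),
               right_hand_pose=torch.zeros(1, 45))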

gngdb · Jan 31 '22 21:01

Same question. How to fix it?

yijicheng · Sep 11 '22 11:09

I have the same problem. Does anyone have a solution?

Apeng-Rzp · Apr 11 '23 14:04

I solved it by replacing the old SMPL model file with the updated SMPL file.

eehoeskrap · Apr 29 '24 05:04