
visualization of rot in 263-vector

Open Ying156209 opened this issue 2 years ago • 13 comments

Hi, I am trying to visualize the joint rotation part of the representation, since converting from XYZ positions back to joint rotations takes time, but the result does not look right. Here is my script. I have checked all the joint indices and made sure the BVH construction is correct.

import numpy as np
import torch

from common.quaternion import *      # quaternion_to_cont6d, etc.
from paramUtil import *
from rotation_conversions import *   # rotation_6d_to_matrix, matrix_to_euler_angles (from ACTOR)

mean = np.load('./HumanML3D/Mean.npy')
std = np.load('./HumanML3D/Std.npy')
ref2 = np.load('./HumanML3D/new_joint_vecs/012314.npy')

def recover_rot(data):
    # data: [bs, seqlen, 263/251] for HumanML3D/KIT
    joints_num = 22 if data.shape[-1] == 263 else 21
    data = torch.Tensor(data)
    # recover_root_rot_pos comes from the HumanML3D motion-processing utilities
    r_rot_quat, r_pos = recover_root_rot_pos(data)
    r_pos_pad = torch.cat([r_pos, torch.zeros_like(r_pos)], dim=-1).unsqueeze(-2)
    r_rot_cont6d = quaternion_to_cont6d(r_rot_quat)
    # rot_data starts after root_rot_velocity (1) + root_linear_velocity (2)
    # + root_y (1) + ric_data ((joints_num - 1) * 3)
    start_indx = 1 + 2 + 1 + (joints_num - 1) * 3
    end_indx = start_indx + (joints_num - 1) * 6
    cont6d_params = data[..., start_indx:end_indx]
    cont6d_params = torch.cat([r_rot_cont6d, cont6d_params], dim=-1)
    cont6d_params = cont6d_params.view(-1, joints_num, 6)   # (frames, joints, 6)
    # append the root translation (zero-padded to 6) as an extra "joint"
    cont6d_params = torch.cat([cont6d_params, r_pos_pad], dim=-2)
    return cont6d_params

def feats2rots(features):
    features = features * std + mean   # de-normalize
    return recover_rot(features)

rot6d_all = feats2rots(ref2).numpy()
rot6d_all = rot6d_all.reshape(-1, 23, 6)
rot6d_all_trans = rot6d_all[:, -1, :3]   # root translation (the padded last "joint")
rot6d_all_rot = rot6d_all[:, :-1]        # 6D rotations for the 22 joints

matrix = rotation_6d_to_matrix(torch.Tensor(rot6d_all_rot))
euler = matrix_to_euler_angles(matrix, "XYZ")
euler = euler / np.pi * 180   # radians -> degrees
np.save('euler_rot_gt.npy', euler)
np.save('trans_rot_gt.npy', rot6d_all_trans)

The functions rotation_6d_to_matrix and matrix_to_euler_angles are from https://github.com/Mathux/ACTOR/blob/master/src/utils/rotation_conversions.py. I got skeleton results like this (with or without the mean/std normalization makes little difference): [image]

Have you tried to visualize the rotations? I need your help. Thanks so much!!

Ying156209 avatar Jan 05 '23 07:01 Ying156209

I am running into the same problem.

Kebii avatar Jan 05 '23 08:01 Kebii

Hi Ying156209, I have a question for you: how do you render in Blender and export the EPS file to achieve the effect in the paper?

eanson023 avatar Jan 08 '23 12:01 eanson023

(Quoting eanson023's question above.)

Hi, to visualize in Blender, TEMOS provides a good introduction. https://github.com/Mathux/TEMOS

EricGuo5513 avatar Jan 09 '23 18:01 EricGuo5513

(Quoting Ying156209's original post above.)

Hello, unfortunately, I haven't tried to visualize the rotations directly; I usually transform them to XYZ coordinates. Also, in generation we do not use the generated rotations; they only play the role of a regularization. For your case, here are some comments:

  1. The new_joint_vecs are not normalized, so you don't need to de-normalize them.
  2. For the 6D-to-matrix conversion, you could use cont6d_to_matrix in quaternion.py; I am not sure whether this makes a difference (see the sketch after this list).
  3. You could try different matrix_to_euler_angles functions; for Blender, I am not sure whether it expects an extrinsic or intrinsic Euler convention.
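
For anyone who wants to check the conversion itself, here is a minimal, self-contained sketch of the standard continuous-6D-to-rotation-matrix construction (Zhou et al., "On the Continuity of Rotation Representations in Neural Networks"). cont6d_to_matrix_sketch is a hypothetical stand-in, and the column convention is an assumption; verify both against cont6d_to_matrix in quaternion.py before relying on it:

import torch
import torch.nn.functional as F

def cont6d_to_matrix_sketch(cont6d):
    # cont6d: (..., 6); the two 3-vectors are taken as the first two
    # (un-normalized) columns of the rotation matrix (an assumption --
    # some implementations treat them as rows instead)
    x_raw, y_raw = cont6d[..., :3], cont6d[..., 3:]
    x = F.normalize(x_raw, dim=-1)                          # 1st column
    z = F.normalize(torch.cross(x, y_raw, dim=-1), dim=-1)  # 3rd column
    y = torch.cross(z, x, dim=-1)                           # 2nd column
    return torch.stack([x, y, z], dim=-1)                   # (..., 3, 3)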

EricGuo5513 avatar Jan 09 '23 19:01 EricGuo5513

Hi. Can I recover the joint rotations from the positions? I tried the inverse_kinematics_np function from the Skeleton class, but it does not seem to work properly.
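
(For anyone trying the same route, here is a rough, unverified sketch of how the HumanML3D processing code typically drives inverse_kinematics_np; t2m_raw_offsets, t2m_kinematic_chain, and the face-joint indices are assumptions pieced together from the dataset-processing scripts, so verify them against paramUtil.py and the notebooks:)

import numpy as np
import torch
from common.skeleton import Skeleton
from paramUtil import t2m_raw_offsets, t2m_kinematic_chain

positions = np.load('./HumanML3D/new_joints/012314.npy')  # (frames, 22, 3)

n_raw_offsets = torch.from_numpy(t2m_raw_offsets)
skel = Skeleton(n_raw_offsets, t2m_kinematic_chain, 'cpu')

# face_joint_indx = [right hip, left hip, right shoulder, left shoulder]
face_joint_indx = [2, 1, 17, 16]
quat_params = skel.inverse_kinematics_np(positions, face_joint_indx,
                                         smooth_forward=True)
print(quat_params.shape)  # expected: (frames, 22, 4) quaternions per joint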

XinandYu avatar Mar 29 '23 19:03 XinandYu

(Quoting Ying156209's original post and EricGuo5513's reply above.)

I got the same result as in that picture. I'm wondering whether it's a limitation of the algorithm or whether I'm using it in the wrong way. Looking forward to your reply.

XinandYu avatar Mar 30 '23 08:03 XinandYu

I'm running into the same problem; it seems that the rotations in the 263-vector cannot yield a correct visualization. Did you solve it?

wyhuai avatar Apr 04 '23 14:04 wyhuai

(Quoting wyhuai's comment above.)

Not yet. I'm trying to figure out what's going on in the code; I assume there is no bug. In my view, the reason might be that the code calculates local rotations while the input should be world rotations. The analytic IK method might suffer from that, producing these bad results.
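
(To make the local/world distinction concrete, here is a minimal self-contained sketch of the relation; qmul_sketch and q_conj are hypothetical stand-ins for the qmul/qinv utilities in common/quaternion.py:)

import torch

def qmul_sketch(a, b):
    # Hamilton product of quaternions in (w, x, y, z) order
    w1, x1, y1, z1 = a.unbind(-1)
    w2, x2, y2, z2 = b.unbind(-1)
    return torch.stack([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ], dim=-1)

def q_conj(q):
    # conjugate (= inverse for unit quaternions): negate the vector part
    return torch.cat([q[..., :1], -q[..., 1:]], dim=-1)

# A joint's world rotation is its parent's world rotation composed with
# its own local rotation:
#     world_q[i] = qmul_sketch(world_q[parent[i]], local_q[i])
# so the local rotation a BVH file expects is recovered by
#     local_q[i] = qmul_sketch(q_conj(world_q[parent[i]]), world_q[i])
# Feeding world (chain-accumulated) rotations into a BVH as if they were
# local ones produces exactly the kind of broken skeleton shown above.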

Ssstirm avatar Apr 05 '23 08:04 Ssstirm

Hi, I have received a lot of comments that our current rotation representation seems incompatible with 3D software such as Blender, and I think I see the reason. In the IK/FK code in skeleton.py, for the i-th bone we apply the rotation of the bone itself, while in BVH the offset should actually be rotated by the bone's parent instead. Therefore, in line 91, you could try using the parent bone's rotation instead of the bone's own; I am not sure if it works. Here I attach our FK and the BVH FK, where you can see the difference in how global positions are obtained. Our FK:

for i in range(1, len(chain)):
      # accumulate the CURRENT bone's rotation before rotating its offset
      R = qmul(R, quat_params[:, chain[i]])
      offset_vec = offsets[:, chain[i]]
      joints[:, chain[i]] = qrot(R, offset_vec) + joints[:, chain[i-1]]

BVH FK:

for i in range(1, len(self.parents)):
    global_quats[:, i] = qmul(global_quats[:, self.parents[i]], local_quats[:, i])
    # the offset is rotated by the PARENT's global rotation, not the bone's own
    global_pos[:, i] = qrot(global_quats[:, self.parents[i]], offsets[:, i]) + global_pos[:, self.parents[i]]

I hope this helps. I do not have time to validate this idea, but if anyone figures it out, this way or any other, I would greatly appreciate you letting me know. If it does not work, the recent work ReMoDiffuse managed to use the rotation representation in their demo; you may refer to them.
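
(To spell out the suggested change: an untested sketch of the modified FK loop, reusing the variable names from the snippets above. It simply swaps the order of the two steps so that each offset is rotated by the rotation accumulated up to the parent, matching the BVH convention:)

# Untested sketch: rotate the offset by the rotation accumulated up to the
# PARENT, then fold in the current bone's own rotation.
for i in range(1, len(chain)):
    joints[:, chain[i]] = qrot(R, offsets[:, chain[i]]) + joints[:, chain[i-1]]
    R = qmul(R, quat_params[:, chain[i]])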

By the way, I have updated the quaternion/euler/cont6d conversion functions in quaternion.py, which should now be safe to use.

EricGuo5513 avatar Apr 17 '23 17:04 EricGuo5513

Hi, thanks for creating this useful dataset and amazing work in text2motion!

I would like to ask a question about the 263-vector. I know its shape is (#frames, 263) and that it contains local velocity, rotation, rotation velocity, foot contact, etc., but I don't know the index range for each of them. Could I have a detailed description of it?

yufu-liu avatar Jul 26 '23 03:07 yufu-liu

Hi, the meaning of each entry is as follows:

root_rot_velocity (B, seq_len, 1)
root_linear_velocity (B, seq_len, 2)
root_y (B, seq_len, 1)
ric_data (B, seq_len, (joint_num - 1) * 3)
rot_data (B, seq_len, (joint_num - 1) * 6)
local_velocity (B, seq_len, joint_num * 3)
foot_contact (B, seq_len, 4)
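
(For convenience, a small self-contained sketch that turns this layout into explicit index ranges; joint_num = 22 for HumanML3D, and the slice math is just the list above accumulated:)

joint_num = 22
sizes = [
    ("root_rot_velocity",    1),
    ("root_linear_velocity", 2),
    ("root_y",               1),
    ("ric_data",             (joint_num - 1) * 3),   # 63
    ("rot_data",             (joint_num - 1) * 6),   # 126
    ("local_velocity",       joint_num * 3),         # 66
    ("foot_contact",         4),
]
start = 0
for name, size in sizes:
    print(f"{name}: [{start}, {start + size})")
    start += size
print("total:", start)   # 263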


EricGuo5513 avatar Jul 27 '23 01:07 EricGuo5513

Hi Eric,

You mentioned that the rotation data only plays the role of regularization. Could you explain this claim a little? Thanks.

mingdianliu avatar Jul 28 '23 21:07 mingdianliu

I got the same problems as above. This dataset has been used a lot in the animation industry. Can you @EricGuo5513 please confirm whether it is suitable for 3D software? Otherwise, we could help edit the data and make it compatible, so that further research works with 3D software. I wish someone could confirm whether it's our wrong interpretation or whether it's simply not possible to map this representation to 3D software.

Stefano-retinize avatar Aug 18 '24 18:08 Stefano-retinize