HumanML3D
visualization of rot in 263-vector
Hi, I'm trying to visualize the joint-rotation representation, since converting from XYZ positions to joint rotations takes time, but the result doesn't look right. Here is my script. I have checked all the joint indexes and also made sure the BVH construction is correct.
import numpy as np
import torch
from common.quaternion import *
from paramUtil import *
from rotation_conversions import *
mean = np.load('./HumanML3D/Mean.npy')
std = np.load('./HumanML3D/Std.npy')
ref2 = np.load('./HumanML3D/new_joint_vecs/012314.npy')
def recover_rot(data):
# dataset [bs, seqlen, 263/251] HumanML/KIT
joints_num = 22 if data.shape[-1] == 263 else 21
data = torch.Tensor(data)
r_rot_quat, r_pos = recover_root_rot_pos(data)
r_pos_pad = torch.cat([r_pos, torch.zeros_like(r_pos)], dim=-1).unsqueeze(-2)
r_rot_cont6d = quaternion_to_cont6d(r_rot_quat)
start_indx = 1 + 2 + 1 + (joints_num - 1) * 3
end_indx = start_indx + (joints_num - 1) * 6
cont6d_params = data[..., start_indx:end_indx]
cont6d_params = torch.cat([r_rot_cont6d, cont6d_params], dim=-1)
cont6d_params = cont6d_params.view(-1, joints_num, 6) # frames, joints, joints_dim
cont6d_params = torch.cat([cont6d_params, r_pos_pad], dim=-2)
return cont6d_params
def feats2rots(features):
features = features * std + mean
return recover_rot(features)
rot6d_all = feats2rots(ref2).numpy()
rot6d_all = rot6d_all.reshape(-1, 23, 6)  # 22 joint rotations + root translation pad
rot6d_all_trans = rot6d_all[:, -1, :3]
rot6d_all_rot = rot6d_all[:, :-1]
matrix = rotation_6d_to_matrix(torch.Tensor(rot6d_all_rot))
euler = matrix_to_euler_angles(matrix, "XYZ")
euler = euler / np.pi * 180
np.save('euler_rot_gt.npy', euler)
np.save('trans_rot_gt.npy', rot6d_all_trans)
The functions 'rotation_6d_to_matrix' and 'matrix_to_euler_angles' are from https://github.com/Mathux/ACTOR/blob/master/src/utils/rotation_conversions.py
I got skeleton results like this. (With or without the mean/std normalization makes little difference.)
Have you tried to visualize the rotations? I need your help. Thanks so much!!
I'm running into the same problem.
Hi Ying156209, I have a question for you: how do you render in Blender and export an EPS file to achieve the effect shown in the paper?
Hi, to visualize in Blender, TEMOS provides a good introduction: https://github.com/Mathux/TEMOS
Hello, unfortunately I haven't tried to visualize the rotations directly; I usually transform them to XYZ coordinates first. Also, during generation we do not use the generated rotations; they only play the role of regularization. For your case, a few comments:
- The new_joint_vecs are not normalized, so you don't need to de-normalize them.
- For the 6d-to-matrix conversion, you could use cont6d_to_matrix in quaternion.py; not sure if this makes a difference.
- You could try different matrix_to_euler_angles conventions; for Blender, I'm not sure whether it expects extrinsic or intrinsic angles.
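For reference, the continuous-6D-to-matrix conversion can be sketched in plain NumPy. This is a generic Gram-Schmidt reconstruction (the 6 numbers are read as the first two columns of the rotation matrix), not necessarily identical to `cont6d_to_matrix` in quaternion.py, whose column/row convention you should check:

```python
import numpy as np

def cont6d_to_matrix_np(d6):
    """Reconstruct a 3x3 rotation matrix from a 6D representation.

    The 6 numbers are read as the first two columns of the matrix;
    Gram-Schmidt makes them orthonormal, the cross product gives column 3.
    """
    a1, a2 = d6[..., :3], d6[..., 3:]
    b1 = a1 / np.linalg.norm(a1, axis=-1, keepdims=True)
    b2 = a2 - np.sum(b1 * a2, axis=-1, keepdims=True) * b1
    b2 = b2 / np.linalg.norm(b2, axis=-1, keepdims=True)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=-1)  # b1, b2, b3 as columns

# identity rotation: first two columns are (1,0,0) and (0,1,0)
R = cont6d_to_matrix_np(np.array([1., 0., 0., 0., 1., 0.]))
print(np.allclose(R, np.eye(3)))  # True
```

Note that some implementations (e.g. pytorch3d's `rotation_6d_to_matrix`) stack the vectors as rows instead of columns, which is one possible source of the broken skeleton.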
Hi. Can I recover the joint rotations from the positions? I tried the inverse_kinematics_np function from the Skeleton class, but it doesn't seem to work properly.
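The core step inside an IK routine like `inverse_kinematics_np` is finding the quaternion that rotates a rest-pose bone offset onto the observed bone vector. A minimal self-contained sketch of that step (hypothetical helper names, not the repo's exact code):

```python
import numpy as np

def quat_between(u, v):
    """Quaternion (w, x, y, z) rotating vector u onto vector v."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    w = 1.0 + np.dot(u, v)
    if w < 1e-8:                          # opposite vectors: 180 deg about any orthogonal axis
        axis = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(u, [0.0, 1.0, 0.0])
        q = np.concatenate([[0.0], axis])
    else:
        q = np.concatenate([[w], np.cross(u, v)])
    return q / np.linalg.norm(q)

def qrot(q, v):
    """Rotate vector v by quaternion q = (w, x, y, z)."""
    w, xyz = q[0], q[1:]
    t = 2.0 * np.cross(xyz, v)
    return v + w * t + np.cross(xyz, t)

offset = np.array([0.0, 1.0, 0.0])         # rest-pose bone direction
bone   = np.array([1.0, 0.0, 0.0])         # observed bone direction
q = quat_between(offset, bone)
print(np.allclose(qrot(q, offset), bone))  # True
```

One caveat with this kind of analytic IK: the twist around the bone axis is unobservable from positions alone, so the recovered rotations are only determined up to that twist.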
I got the same result as the picture above. I'm wondering whether it's a limitation of the algorithm or I'm using it in a wrong way. Looking forward to your reply.
I've hit the same problem; it seems the rotations in the 263-vector cannot yield a correct visualization. Did you solve it?
Not yet. I'm trying to figure out what's going on in the code; I assume there is no bug. In my view, the reason might be that it calculates local rotations while the input should be world rotations. An analytic method might suffer from that, producing these terrible results.
Hi, I've received a lot of comments that our current rotation representation seems incompatible with 3D software like Blender, and I think I understand the reason. In the IK/FK in skeleton.py, for the i-th bone we compute the rotation of the bone itself, while in BVH we should instead use the rotation of its parent. Therefore, in line 91, you could try using the parent bone instead of the bone itself; I am not sure if it works. Here are our FK and the BVH FK, where you can see the difference when obtaining global positions. Our FK:
for i in range(1, len(chain)):
    R = qmul(R, quat_params[:, chain[i]])
    offset_vec = offsets[:, chain[i]]
    joints[:, chain[i]] = qrot(R, offset_vec) + joints[:, chain[i-1]]
BVH FK:
for i in range(1, len(self.parents)):
    global_quats[:, i] = qmul(global_quats[:, self.parents[i]], local_quats[:, i])
    global_pos[:, i] = qrot(global_quats[:, self.parents[i]], offsets[:, i]) + global_pos[:, self.parents[i]]
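To make the difference concrete, here is a self-contained toy sketch (plain NumPy, a hypothetical 2-bone chain, not the repo's code): feeding the same per-joint quaternions into the two conventions gives different positions, and shifting each child's rotation onto its parent reconciles them.

```python
import numpy as np

def qmul(q, r):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 + y1*w2 + z1*x2 - x1*z2,
        w1*z2 + z1*w2 + x1*y2 - y1*x2,
    ])

def qrot(q, v):
    """Rotate vector v by quaternion q = (w, x, y, z)."""
    w, xyz = q[0], q[1:]
    t = 2.0 * np.cross(xyz, v)
    return v + w * t + np.cross(xyz, t)

# toy chain root -> j1 -> j2, rest-pose offsets point along +y
offsets = [np.zeros(3), np.array([0., 1., 0.]), np.array([0., 1., 0.])]
qid  = np.array([1., 0., 0., 0.])
qz90 = np.array([np.cos(np.pi/4), 0., 0., np.sin(np.pi/4)])  # +90 deg about z

def fk_repo(local, offsets):
    """HumanML3D-style FK: bone i is rotated by the accumulated rotation *including* joint i."""
    pos = [np.zeros(3)]
    R = local[0]
    for i in range(1, len(offsets)):
        R = qmul(R, local[i])
        pos.append(qrot(R, offsets[i]) + pos[i - 1])
    return pos

def fk_bvh(local, offsets):
    """BVH-style FK: bone i is rotated by its *parent's* global rotation."""
    pos = [np.zeros(3)]
    G = [local[0]]
    for i in range(1, len(offsets)):
        pos.append(qrot(G[i - 1], offsets[i]) + pos[i - 1])
        G.append(qmul(G[i - 1], local[i]))
    return pos

local = [qid, qz90, qid]
# same per-joint quaternions, two conventions: j1 lands in different places
print(fk_repo(local, offsets)[1], fk_bvh(local, offsets)[1])
# shifting each child's rotation onto its parent makes BVH reproduce our FK
shifted = [qmul(local[0], local[1]), local[2], qid]
print(all(np.allclose(a, b) for a, b in
          zip(fk_repo(local, offsets), fk_bvh(shifted, offsets))))  # True
```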
Hope this helps. I don't have time to validate this idea, but if anyone figures it out, in this or any other way, I would appreciate it so much if you could let me know. If it does not work: I know the recent work ReMoDiffuse managed to use the rotation representation in their demo, so you may refer to them.
BTW: I have updated the quaternion/euler/cont6d functions in quaternion.py, which should now be safe to use.
Hi, thanks for creating this useful dataset and amazing work in text2motion!
I would like to ask a question about the 263-vector. I know the shape is (#frames, 263), and that it contains local velocity, rotations, rotational velocity, foot contacts, etc. However, I don't know the index range for each of them. Could I have a detailed description of the layout?
Hi, the meaning of each entry is as follows:
- root_rot_velocity (B, seq_len, 1)
- root_linear_velocity (B, seq_len, 2)
- root_y (B, seq_len, 1)
- ric_data (B, seq_len, (joint_num - 1)*3)
- rot_data (B, seq_len, (joint_num - 1)*6)
- local_velocity (B, seq_len, joint_num*3)
- foot contact (B, seq_len, 4)
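Turning that layout into concrete index ranges for HumanML3D (joint_num = 22; the KIT version uses 21 joints, giving 251 dims) is a quick arithmetic check:

```python
# Index layout of the 263-dim HumanML3D feature vector (joint_num = 22).
joints_num = 22

sizes = [
    ("root_rot_velocity",    1),
    ("root_linear_velocity", 2),
    ("root_y",               1),
    ("ric_data",             (joints_num - 1) * 3),
    ("rot_data",             (joints_num - 1) * 6),
    ("local_velocity",       joints_num * 3),
    ("foot_contact",         4),
]

start = 0
ranges = {}
for name, size in sizes:
    ranges[name] = (start, start + size)
    start += size

for name, (lo, hi) in ranges.items():
    print(f"{name}: [{lo}, {hi})")
print("total dims:", start)  # 263
```

Note that rot_data occupies [67, 193), which matches the `start_indx = 1 + 2 + 1 + (joints_num - 1) * 3` arithmetic in the script at the top of this thread.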
Hi Eric,
You mentioned that the rot data only plays the role of regularization. Could you explain this claim a little? Thanks.
I got the same problems as above. This dataset is used a lot in the animation industry. @EricGuo5513 could you please confirm whether it is suitable for 3D software? If it isn't, we could help edit the data to make it compatible, so that further research works with 3D software. I wish someone could confirm whether it's our misinterpretation or it's simply not possible to map this representation into 3D software.