
Skills From Videos

Open zdlarry opened this issue 5 years ago • 12 comments

I find it does not work well when I use the theta from vision-based pose estimators to build mocap data. Is it necessary to perform additional operations on theta besides the ones mentioned in the SFV paper?

zdlarry avatar May 01 '19 15:05 zdlarry

@Dz97313 How did you build your mocap data? I can get mocap data with hmr as in SFV, but I only get the 3D positions, not the quaternions. Were you able to get the quaternions?

highway007 avatar May 07 '19 02:05 highway007

Which video are you trying to imitate? What does the reconstructed reference motion look like? If the pose estimator is not able to generate a good reference motion, then imitation learning will likely not work well.

xbpeng avatar May 07 '19 02:05 xbpeng

@highway007 I get the quaternions from the theta contained in hmr's output. How do you get the 3D positions from hmr? Have you transformed your 3D positions into world coordinates?
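
For context, hmr's weak-perspective camera output (`cams = [s, tx, ty]` in the demo code) is what relates the local `joints3d` to the image, and it is the usual starting point for recovering a root translation. A minimal sketch of the common usage; the camera layout follows the hmr demo, but the focal length and image resolution below are assumptions, not values from this thread:

```python
import numpy as np

def weak_perspective_project(joints3d, cam):
    """Project hmr's local 3D joints to image space with its
    weak-perspective camera cam = [s, tx, ty] (hmr demo convention;
    verify against your checkout)."""
    s, tx, ty = cam
    return s * (joints3d[:, :2] + np.array([tx, ty]))

def approx_root_depth(cam, focal=500.0, img_res=224):
    """Heuristic: recover a rough absolute depth for the root from the
    weak-perspective scale, assuming a fixed focal length and crop size.
    Only useful as a coarse world translation, not an exact value."""
    return 2.0 * focal / (img_res * cam[0])
```

This gives image-plane positions and an approximate depth; a proper world-frame trajectory still needs the coordinate conversion to DeepMimic's frame discussed later in the thread.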

zdlarry avatar May 07 '19 18:05 zdlarry

@xbpeng I am imitating the cartwheel video. But I did not retrain the hmr model with your image-augmentation method; I simply rotated the upside-down person to an upright one before predicting the rotation params, since I assumed the rotation params should be the same. But the hmr model cannot give good predictions here, and the OpenPose model misses some joint info at times. Can you help me out? Thanks!

zdlarry avatar May 07 '19 19:05 zdlarry

Hi @Dz97313, from this line in the hmr demo: `joints, verts, cams, joints3d, theta = model.predict(input_img, get_theta=True)` I got `joints3d`, and it is in local position (not world). I also see: `pose = theta[:, 3:75]  # This is the 1 x 72 pose vector of SMPL`, which is the rotation of the 24 joints in axis-angle format. Did you transform the rotations of the 24 joints to quaternions?
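
A minimal sketch of that axis-angle-to-quaternion conversion, assuming SciPy is available; the `[w, x, y, z]` output ordering is an assumption about what DeepMimic's mocap files expect and should be checked against your character file:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def smpl_pose_to_quaternions(pose72):
    """Convert a 1x72 SMPL pose vector (24 joints in axis-angle, as in the
    hmr demo slice theta[:, 3:75]) to per-joint quaternions in [w, x, y, z]
    order."""
    aa = np.asarray(pose72, dtype=float).reshape(24, 3)
    quat_xyzw = R.from_rotvec(aa).as_quat()  # SciPy returns [x, y, z, w]
    # Reorder to [w, x, y, z].
    return np.concatenate([quat_xyzw[:, 3:], quat_xyzw[:, :3]], axis=1)
```

Note that these quaternions are still expressed in SMPL's joint frames; they are not directly usable as DeepMimic joint rotations without the retargeting discussed below.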

highway007 avatar May 08 '19 02:05 highway007

@highway007 I did indeed transform the pose to quaternions directly, but the result is not good. Sometimes the axis of rotation is disordered.
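
A "disordered" rotation axis often means the rotations are still expressed in the source frame. A sketch of re-expressing each joint rotation under a change of basis, R' = C R C⁻¹; the particular frame change `C` below is a hypothetical example, since the actual axis flip depends on your hmr-to-DeepMimic setup:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical change of basis from hmr's camera frame to the character
# frame, e.g. a 180-degree flip about x; determine the real one by
# visualizing the retargeted motion.
C = R.from_euler('x', 180, degrees=True)

def convert_rotation(q_xyzw):
    """Conjugate a joint rotation by the frame change: R' = C R C^-1.
    The rotation stays the same physically but is expressed in the
    target skeleton's coordinate axes."""
    return (C * R.from_quat(q_xyzw) * C.inv()).as_quat()
```

Without this per-joint conjugation (plus any bind-pose offset), quaternions converted directly from SMPL axis-angle will appear to rotate about the wrong axes.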

zdlarry avatar May 08 '19 05:05 zdlarry

@Dz97313 Hi, can you tell me how you transformed it to quaternions? (Also, some joints like the knee only have 1 DoF; how do you transform those?) I also wonder how to get the root's world position. Thx ;)
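
For 1-DoF joints, DeepMimic's mocap format stores a single scalar angle rather than a quaternion. One simple (lossy) reduction is to take the magnitude of the axis-angle vector, signed by its projection on the hinge axis; the hinge axis below is a hypothetical placeholder to check against your character file:

```python
import numpy as np

def revolute_angle(axis_angle, hinge_axis=np.array([1.0, 0.0, 0.0])):
    """Reduce a 3D axis-angle rotation to a single hinge angle for a
    1-DoF joint (e.g. the knee). hinge_axis is an assumption; take it
    from the joint definition in your DeepMimic character file."""
    axis_angle = np.asarray(axis_angle, dtype=float)
    angle = np.linalg.norm(axis_angle)
    if angle < 1e-8:
        return 0.0
    # Sign the angle by whether the rotation axis aligns with the hinge.
    sign = np.sign(np.dot(axis_angle / angle, hinge_axis))
    return sign * angle
```

Any rotation component off the hinge axis is discarded, so a noisy pose estimate can still produce a jittery knee angle.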

highway007 avatar May 08 '19 11:05 highway007

Yes, the coordinates for hmr are different from the ones in DeepMimic. When retargeting the motion to the character, you should visualize it with args/kin_char_args.txt to make sure things are retargeted properly. Otherwise the policy doesn't really have a chance of learning the right motion if the reference motion is wrong.
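
That visualization is a plain kinematic playback of the reference motion, with no policy involved. Assuming a standard DeepMimic checkout, the invocation would look like:

```shell
# Play back the retargeted reference motion on the kinematic character.
# Note: as mentioned at the end of this thread, the arg file was later
# renamed to play_motion_humanoid3d_args.txt.
python DeepMimic.py --arg_file args/kin_char_args.txt
```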

xbpeng avatar May 12 '19 16:05 xbpeng

> Yes, the coordinates for hmr are different from the ones in DeepMimic. When retargeting the motion to the character, you should visualize it with args/kin_char_args.txt to make sure things are retargeted properly. Otherwise the policy doesn't really have a chance of learning the right motion if the reference motion is wrong.

Yeah, but the question is how to retarget. Apart from the bind pose (T-pose) being different, the joint count is different too... How can you possibly retarget? What code do you use? Could you please share it if you have some?

Zju-George avatar Jul 30 '19 06:07 Zju-George

> @highway007 I did indeed transform the pose to quaternions directly, but the result is not good. Sometimes the axis of rotation is disordered.

How is your work going? Did you find a good solution for retargeting? Please help, I am in the same boat.

Zju-George avatar Jul 30 '19 06:07 Zju-George

@xbpeng There is no file called kin_char_args.txt under the args folder, is there?

yjc765 avatar Nov 06 '19 15:11 yjc765

Sorry about that, kin_char_args.txt has been renamed to play_motion_humanoid3d_args.txt. I will fix the readme.

xbpeng avatar Nov 06 '19 18:11 xbpeng