I2L-MeshNet_RELEASE

Have you used a 3D loss when fitting SMPLify-X?

Open zhLawliet opened this issue 3 years ago • 25 comments

Thanks a lot for sharing the SMPLify-X fits for H36M. Did you use a 3D loss when fitting? I found that the side view is slanted, which suggests the depth is incorrect. [screenshot: slanted side-view fit]

zhLawliet avatar Aug 11 '21 09:08 zhLawliet

The fits are in the world coordinate system. You should apply the camera extrinsics to render other views.

mks0601 avatar Aug 11 '21 12:08 mks0601
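(For reference, a minimal sketch of what "apply camera extrinsics" means here: mapping world-coordinate vertices into a camera's coordinate frame. The names world_mesh, R, and t are placeholders; this is illustrative, not code from the repo.)

```python
import numpy as np

def world_to_camera(world_mesh, R, t):
    """Transform (V, 3) world-coordinate vertices into the coordinate
    system of a camera with rotation R (3, 3) and translation t (3,)."""
    return np.dot(R, world_mesh.T).T + t.reshape(1, 3)
```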

@mks0601 Thanks. I see that you merge the root pose and the camera rotation:

    # merge root pose and camera rotation
    root_pose = smpl_pose[self.root_joint_idx,:].numpy()
    root_pose, _ = cv2.Rodrigues(root_pose)
    root_pose, _ = cv2.Rodrigues(np.dot(R, root_pose))
    smpl_pose[self.root_joint_idx] = torch.from_numpy(root_pose).view(3)

Do you mean that this R corresponds to the x-y view, so it is not suitable for a z-y view? If I want to show the z-y view, do I need a different R? The code is:

    smpl_mesh_coord, smpl_joint_coord = self.smpl.layer['neutral'](smpl_pose, smpl_shape)
    smpl_mesh_coord = smpl_mesh_coord.numpy().astype(np.float32).reshape(-1,3)
    fit_mesh_coord_cam = smpl_mesh_coord[...,[2,1,0]]  # xyz -> zyx
    fit_mesh_coord_cam = (fit_mesh_coord_cam + 1) / 2 * 255
    vis(fit_mesh_coord_cam)

zhLawliet avatar Aug 12 '21 03:08 zhLawliet

I can't understand your question. R is just a rotation matrix, included in the camera extrinsic parameters.

mks0601 avatar Aug 12 '21 07:08 mks0601

Yes, the camera extrinsic parameters include R and t. I think fit_mesh_coord_cam already has the camera extrinsics applied through the "merge root pose and camera rotation" step, but the side view is still slanted.

zhLawliet avatar Aug 12 '21 07:08 zhLawliet

How did you visualize your results?

mks0601 avatar Aug 12 '21 07:08 mks0601

The code for the side view:

    # smpl parameters (pose: 72 dimension, shape: 10 dimension)
    pose, shape, trans = smpl_param['pose'], smpl_param['shape'], smpl_param['trans']
    smpl_pose = torch.FloatTensor(pose).view(-1,3)
    smpl_shape = torch.FloatTensor(shape).view(1,-1)
    # camera rotation and translation
    R, t = np.array(cam_param['R'], dtype=np.float32).reshape(3,3), np.array(cam_param['t'], dtype=np.float32).reshape(3)
    # merge root pose and camera rotation
    root_pose = smpl_pose[self.root_joint_idx,:].numpy()
    root_pose, _ = cv2.Rodrigues(root_pose)
    root_pose, _ = cv2.Rodrigues(np.dot(R, root_pose))
    smpl_pose[self.root_joint_idx] = torch.from_numpy(root_pose).view(3)
    smpl_mesh_coord, smpl_joint_coord = self.smpl.layer['neutral'](smpl_pose, smpl_shape)
    smpl_mesh_coord = smpl_mesh_coord.numpy().astype(np.float32).reshape(-1,3)
    fit_mesh_coord_cam = smpl_mesh_coord[...,[2,1,0]]  # xyz -> zyx
    fit_mesh_coord_cam = (fit_mesh_coord_cam + 1) / 2 * 255
    fit_mesh_coord_cam = vis_mesh(img.copy(), fit_mesh_coord_cam, radius=1, color=(0,0,255), IS_cmap=False)

zhLawliet avatar Aug 12 '21 07:08 zhLawliet

What is this line?

    fit_mesh_coord_cam = smpl_mesh_coord[...,[2,1,0]]  # xyz -> zyx

And why don't you apply the extrinsic translation?

mks0601 avatar Aug 12 '21 07:08 mks0601

Could you follow my codes in Human36M/Human36M.py?

mks0601 avatar Aug 12 '21 07:08 mks0601

Yes, I followed your code in Human36M/Human36M.py, and I get the correct result for the front view, which applies the extrinsics (R, t) and the internal parameters (cam_param['focal'], cam_param['princpt']). [screenshot: correct front-view result]

[screenshot: side-view visualization] The original coordinate system is (x, y, z); that line converts the coordinates xyz -> zyx for the side view. I think the extrinsic translation has already been applied to smpl_mesh_coord through the "merge root pose and camera rotation" step, and there are no internal parameters for that axis order, so I just want to visualize the overall orientation in the side view.

zhLawliet avatar Aug 12 '21 08:08 zhLawliet

I can't understand what 'internal parameters' means. You can just apply the extrinsics, without an axis transpose like xyz -> zyx.

mks0601 avatar Aug 12 '21 08:08 mks0601

Thanks. By 'internal parameters' I mean cam_param['focal'] and cam_param['princpt']. There is only one set of extrinsics, for the front view, and now I want to visualize the overall orientation from a side view. My unclear description may have confused you, so let me rephrase the question: how can I get a correct side view?

zhLawliet avatar Aug 12 '21 08:08 zhLawliet

The extrinsics are defined for all camera viewpoints. You can apply the extrinsics of the side viewpoint.

mks0601 avatar Aug 12 '21 08:08 mks0601
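(A hedged sketch of the full chain for rendering the world-coordinate fit from another, e.g. side, camera: apply that camera's extrinsics, then its focal/princpt. The helpers below are written in the spirit of world2cam/cam2pixel utilities, but this is an illustration under those assumptions, not the repo's code.)

```python
import numpy as np

def world2cam(world_coord, R, t):
    # world -> camera coordinates using the chosen camera's extrinsics
    return np.dot(R, world_coord.T).T + t.reshape(1, 3)

def cam2pixel(cam_coord, f, c):
    # perspective projection using that camera's intrinsics (focal, princpt)
    x = cam_coord[:, 0] / cam_coord[:, 2] * f[0] + c[0]
    y = cam_coord[:, 1] / cam_coord[:, 2] * f[1] + c[1]
    return np.stack([x, y], axis=1)

def project_to_view(world_mesh, cam_param):
    # world_mesh: (V, 3) SMPL vertices in world coordinates, i.e. the fit
    # *before* merging any camera rotation into the root pose.
    R = np.array(cam_param['R'], dtype=np.float32).reshape(3, 3)
    t = np.array(cam_param['t'], dtype=np.float32).reshape(3)
    cam_coord = world2cam(world_mesh, R, t)
    return cam2pixel(cam_coord, cam_param['focal'], cam_param['princpt'])
```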

Thanks for your patient reply, I'll try it.

zhLawliet avatar Aug 12 '21 08:08 zhLawliet

@mks0601 Can you provide the benchmark code for the 3DPW challenge? How can I reproduce the competition performance? [screenshot]

zhLawliet avatar Oct 11 '21 09:10 zhLawliet

Most of the code of the winning entry of the 3DPW challenge is based on this repo. The tracking code was newly added, though.

mks0601 avatar Oct 11 '21 11:10 mks0601

Thank you for your reply. Your I2L-MeshNet won first and second place in the 3DPW challenge on the unknown-association track, which does not allow using ground-truth data in any form, so how did you get the right person in multi-person scenes? Another question: "bbox_root_pw3d_output.json" only covers 3DPW_test.json, but the 3DPW challenge above evaluates on the entire dataset, including its train, validation, and test splits. It would be great if you could release this part of the code for the ECCV 2020 3DPW challenge.

zhLawliet avatar Oct 11 '21 12:10 zhLawliet

Q. How did you get the right person in multi-person scenes? -> I used a YOLOv5 human detector. Q. About "bbox_root_pw3d_output.json" -> I used the param stage of I2L-MeshNet, so the RootNet output is not required.

mks0601 avatar Oct 11 '21 12:10 mks0601
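(As a rough illustration of the detection step mentioned above, here is a generic way to get person boxes from YOLOv5 via torch.hub. The model size and confidence threshold are arbitrary assumptions, not the settings of the challenge entry.)

```python
import torch

# Load a pretrained YOLOv5 model from the ultralytics hub (assumed setup).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect_persons(img_path, conf_thr=0.5):
    results = model(img_path)
    det = results.xyxy[0].cpu().numpy()  # (N, 6): x1, y1, x2, y2, conf, cls
    # keep confident detections of COCO class 0 (person)
    return det[(det[:, 5] == 0) & (det[:, 4] >= conf_thr), :4]
```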

Thank you, I understand. Could you release the part of the code that submits the results for the ECCV 2020 3DPW challenge?

zhLawliet avatar Oct 11 '21 12:10 zhLawliet

Sorry, I don't have the code for the 3DPW challenge. But there is no big change from this repo.

mks0601 avatar Oct 11 '21 13:10 mks0601

Thanks, I'll try it.

zhLawliet avatar Oct 12 '21 02:10 zhLawliet

@mks0601 Can you share all of your YOLOv4 detection results for 3DPW that were used for the 3DPW challenge? There is only YOLO.json for the test set: "data/PW3D/Human_detection_result/YOLO.json". I tried to get the bounding boxes with YOLOv4 myself, but they don't match yours well. Thanks.

zhLawliet avatar Oct 14 '21 13:10 zhLawliet

Sorry, we don't have them. What problem are you having?

mks0601 avatar Oct 14 '21 14:10 mks0601

This should be a tracking issue. I want to reproduce your competition performance, which won first and second place in the 3DPW challenge on the unknown-association track. There are multiple candidate boxes in each frame from YOLOv4. How do you choose the best-matching box, especially for multiple people and scenes with overlapping people? [example screenshots]

zhLawliet avatar Oct 15 '21 02:10 zhLawliet

> Most of the code of the winning entry of the 3DPW challenge is based on this repo. The tracking code was newly added, though.

We added human tracking code, as mentioned above.

mks0601 avatar Oct 15 '21 03:10 mks0601
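(The tracking code itself is not in this repo, so purely as a generic illustration, not the authors' method: a common way to keep following the same person across frames is a greedy IoU match of each frame's candidate boxes against the previous frame's box.)

```python
import numpy as np

def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def track_person(prev_box, candidate_boxes, iou_thr=0.3):
    """Pick the candidate with the highest IoU against the previous frame's
    box; return None if nothing overlaps enough (track lost)."""
    if len(candidate_boxes) == 0:
        return None
    ious = [iou(prev_box, b) for b in candidate_boxes]
    best = int(np.argmax(ious))
    return candidate_boxes[best] if ious[best] >= iou_thr else None
```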

ok, thanks

zhLawliet avatar Oct 15 '21 03:10 zhLawliet