tranducanhbk
After setting the L2 reg weight to 1 and calculating the 2D loss with the (x, y) coordinates resized to (8x8), I got slightly better results, but still not good enough....
I did not apply the L2 reg to the SMPL shape parameters; I only calculated the loss on the 3D body rotations (θ). I will try adding the SMPL shape parameters to the loss....
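For reference, a minimal NumPy sketch of what an L2 regularizer over both the SMPL pose (θ) and shape (β) parameters could look like. This is not the repo's implementation (which is in PyTorch); the function name and weights are hypothetical:

```python
import numpy as np

def l2_reg_loss(body_pose, shape_param, w_pose=1.0, w_shape=1.0):
    # L2 penalty on SMPL pose (theta, 23 joints x 3 axis-angle = 69 values)
    # and shape (beta, 10 values). Regularizing only theta corresponds
    # to setting w_shape = 0.
    return w_pose * np.sum(body_pose ** 2) + w_shape * np.sum(shape_param ** 2)

# Example: small uniform pose values, zero shape -> only the pose term contributes.
reg = l2_reg_loss(np.full(69, 0.1), np.zeros(10))
```

The total training loss would then be the data terms plus this regularizer scaled by the chosen weight.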
Could you share the loss function and forward pass of RotationNet with me? I still haven't reached good results. Many thanks.
I already followed this repo, but there are 2 things I'm not clear on: https://github.com/mks0601/Hand4Whole_RELEASE/blob/2f7a608cb05cf586d1e80c76e507d0802a6c13f0/main/config.py#L50 1) You calculated the 2D joint loss in 256x256 space, so self.output_hm_shape = (8, 8, 6)...
I strictly followed your guidance but cannot get good results. Below are the losses I calculate for the backward pass: loss['joint_img'] = self.coord_loss(joint_img, smpl.reduce_joint_set(targets['joint_img'])/32., smpl.reduce_joint_set(meta_info['joint_trunc']), meta_info['is_3D']) for the 2D loss of PositionNet...
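The division by 32 maps ground-truth joints annotated in 256x256 pixel space into the 8x8 heatmap space (256/8 = 32). A simplified NumPy stand-in for that masked coordinate loss (the repo's actual coord_loss is a PyTorch module with more bookkeeping):

```python
import numpy as np

def coord_loss(pred, target, valid):
    # Masked L1 loss; valid is the truncation mask marking joints
    # that are actually inside the image.
    return np.sum(np.abs(pred - target) * valid) / (np.sum(valid) + 1e-8)

# GT joints in 256x256 pixel space; predictions live in 8x8 heatmap space,
# hence the targets are divided by 32 before comparison.
gt_256 = np.array([[128.0, 64.0], [32.0, 240.0]])
pred_8 = np.array([[4.0, 2.0], [1.0, 7.5]])
valid = np.ones((2, 1))
loss = coord_loss(pred_8, gt_256 / 32.0, valid)
```

If predictions and rescaled targets match exactly, as above, the loss is zero; any mismatch between the spaces (e.g. forgetting the /32) inflates the loss by a factor of ~32.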
I did not use VPoser. Maybe that is the big mistake I made. Thank you for your advice.
I used VPoser to decode the SMPL pose from 32 dimensions but still did not get good results. Below is the code to get root_pose, body_pose, shape_param, cam_param # predict 32 dimensions for...
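Shape-wise, the decode step maps a 32-D latent (poZ) to the 21-joint axis-angle body pose that SMPL expects (root pose, shape, and camera are predicted separately). A toy NumPy stand-in with a hypothetical linear decoder (real VPoser uses a learned neural decoder, not this matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
W_dec = rng.standard_normal((63, 32)) * 0.01  # toy stand-in for VPoser's decoder weights

def decode_body_pose(poZ):
    # Map the 32-D latent to a 21-joint x 3 axis-angle body pose,
    # mirroring the output shape of VPoser's decode for the SMPL body.
    return (W_dec @ poZ).reshape(21, 3)

body_pose = decode_body_pose(np.zeros(32))  # zero latent -> all-zero pose here
```

The decoded body pose is then concatenated with the separately-predicted root rotation before being fed to the SMPL layer.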
I checked in the VPoser doc: "On the other hand, body poZ, VPoser's latent space representation for SMPL body, has in total 32 elements with a spherical Gaussian distribution. This means...
Finally I got better results, but in some cases it is still wrong.
I am using SMPL. A stronger L2 regularizer weight means increasing the L2 regularizer weight, right?