humannerf
How to train humannerf with snap_shot?
I trained humannerf with snap_shot, but it produces empty images. Could you tell me how to train humannerf with snap_shot correctly?
It's easy to train with snap_shot; I've already trained it. If you followed the same procedure to generate the SMPL parameters (e.g., with ROMP or VIBE), it should work.
You are getting the empty images because certain ray_shoot values are returning 0, so there is probably a slight mismatch in your data.
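As a debugging aid, one minimal check (a hypothetical helper, not part of the repo) is to project the subject's 3D bounding box with your camera and confirm it lands inside the image; if it doesn't, every ray misses the subject and the renders come out empty:

```python
from itertools import product
import numpy as np

def bbox_in_image(bbox_min, bbox_max, K, E, H, W):
    """Project the 8 corners of a world-space AABB and check they fall
    inside an H x W image. K is the 3x3 intrinsic matrix, E the 4x4
    world-to-camera extrinsic matrix. Assumes the subject is in front
    of the camera."""
    corners = np.array(list(product(*zip(bbox_min, bbox_max))), dtype=np.float64)
    cam = E[:3, :3] @ corners.T + E[:3, 3:4]   # world -> camera coords
    uv = K @ cam                               # camera -> homogeneous pixels
    uv = uv[:2] / uv[2:3]                      # perspective divide
    return bool(np.all((uv[0] >= 0) & (uv[0] < W) &
                       (uv[1] >= 0) & (uv[1] < H)))
```

If this returns False for your frames, the camera parameters and the SMPL translation do not agree, and the ray shooting will return zeros.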
I found that the SMPL parameters generated by ROMP or VIBE are not completely right. I want to use the ground-truth SMPL and camera parameters from snap_shot, but something went wrong. Is it because the images are 1080×1080 in snap_shot but 1024×1024 in zju_mocap?
Both image sizes are different. If you want to use the GT SMPL, all you have to do is create a new "process_dataset" file for snapshot and generate the necessary files for rendering.
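A rough sketch of what such a script could look like, assuming the People-Snapshot file layout (camera.pkl, reconstructed_poses.hdf5 with pose/trans/betas keys, as far as I remember it) and the cameras.pkl / mesh_infos.pkl files the loader reads — check the repo's tools/ scripts for the exact schema:

```python
import pickle
import h5py
import numpy as np

# Hypothetical "process_dataset" sketch for People-Snapshot; verify all
# key names against your copy of the dataset before relying on this.
with open('camera.pkl', 'rb') as f:
    cam = pickle.load(f, encoding='latin1')   # Python-2 pickle
K = np.eye(3, dtype=np.float32)
K[0, 0], K[1, 1] = cam['camera_f']            # focal lengths fx, fy
K[0, 2], K[1, 2] = cam['camera_c']            # principal point cx, cy

with h5py.File('reconstructed_poses.hdf5', 'r') as h5:
    poses = h5['pose'][:]     # (N, 72) axis-angle SMPL poses
    trans = h5['trans'][:]    # (N, 3) root translations
    betas = h5['betas'][:]    # (10,) shape coefficients

cameras, mesh_infos = {}, {}
for idx in range(poses.shape[0]):
    frame = f'frame_{idx:06d}'
    cameras[frame] = {
        'intrinsics': K,
        'extrinsics': np.eye(4, dtype=np.float32),  # monocular: identity
    }
    mesh_infos[frame] = {
        'Rh': poses[idx, :3].copy(),                # global orient -> Rh
        'Th': trans[idx],                           # see the Rh/Th discussion below
        'poses': np.concatenate([np.zeros(3, dtype=poses.dtype),
                                 poses[idx, 3:]]),
        'betas': betas,
        # 'joints' / 'tpose_joints' also need an SMPL forward pass,
        # as in the repo's wild-data preparation script.
    }

with open('cameras.pkl', 'wb') as fo:
    pickle.dump(cameras, fo)
with open('mesh_infos.pkl', 'wb') as fo:
    pickle.dump(mesh_infos, fo)
```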
Thank you, my code may have some errors.
That is a possibility. Feel free to ask for help.
Hello @Dipankar1997161, I have the same problem training humannerf with snap_shot. I think the global orient in snap_shot is not the same as Rh in zju_mocap. Would you kindly share your process-dataset file for snap_shot? Thanks a lot!
I trained the model on People-Snapshot, but my rendering looked like this: https://github.com/chungyiweng/humannerf/issues/74#issue-1773038157
The axis alignment was an issue for me. We have to make some changes for monocular SMPL-based data; I have also trained several multi-view datasets with humannerf and they work perfectly well.
I will try it in the coming week to see if I can solve the issue.
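As a starting point, a common probe for this kind of axis mismatch is to compose a fixed 180° rotation with the global orientation Rh. This is a hypothetical sketch, not a confirmed fix; the right flip axis depends on how the SMPL fits were produced:

```python
import cv2
import numpy as np

def flip_global_orient(rh, axis_angle=(np.pi, 0.0, 0.0)):
    """Compose a fixed rotation (here 180 deg about x) with Rh.

    A probe to try when renders come out upside-down or mirrored;
    swap the axis_angle if a different flip matches your data.
    """
    r_flip, _ = cv2.Rodrigues(np.asarray(axis_angle, dtype=np.float64))
    r_global, _ = cv2.Rodrigues(np.asarray(rh, dtype=np.float64))
    rh_fixed, _ = cv2.Rodrigues(r_flip @ r_global)
    return rh_fixed.ravel()
```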
And regarding the Rh in zju_mocap compared with the global orient in people_snapshot: they are completely different. zju_mocap generates Rh by moving the first three pose values into Rh and setting poses[:3] to 0.

The question is about Th. Is it the same as "trans", or do we need to derive Th from the T-pose pelvis joint and then align all the joints to that pelvis position? Check prepare_wild.py and you will get it.
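A rough sketch of that logic, mirroring what prepare_wild.py does as I read it (smpl_model(poses, betas) -> (vertices, joints) is a stand-in for the SMPL forward pass the script uses):

```python
import numpy as np

# Rh/Th convention, roughly as in the repo's wild-data preparation.
# smpl_model is a placeholder for the SMPL forward pass.
poses = poses.copy()                      # (72,) axis-angle SMPL pose
Rh = poses[:3].copy()                     # global orientation -> Rh
poses[:3] = 0.0                           # ...and zeroed in the pose vector

# Th comes from the T-pose pelvis joint, not from "trans"
_, tpose_joints = smpl_model(np.zeros(72), betas)
pelvis_pos = tpose_joints[0].copy()
Th = pelvis_pos

# all joints are shifted so the pelvis sits at the origin
tpose_joints = tpose_joints - pelvis_pos[None, :]
_, joints = smpl_model(poses, betas)
joints = joints - pelvis_pos[None, :]
```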
Hi! I also meet the same problem. Has anyone solved this issue? My camera extrinsics are eye(4) and my intrinsics are based on snapshot. The poses are from snapshot with poses[:3] replaced by 0, Rh = poses[:3], and Th is based on the pelvis position as in prepare_wild.py. Thank you very much!
Actually this is the issue with SMPL parameters for monocular videos. Since they don't come with specific camera parameters or a fixed axis, the SMPLs are mostly generated with a weak-perspective camera (check PARE, ROMP).
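For reference, the conversion from a weak-perspective camera (s, tx, ty), as predicted by ROMP/PARE/VIBE-style models, to a full-perspective translation looks like this. The `focal` and `img_size` defaults are assumptions you must match to whatever the estimator used:

```python
import numpy as np

def weak_to_full_translation(s, tx, ty, focal=5000.0, img_size=224):
    """Recover the camera-frame translation implied by a
    weak-perspective camera: the scale s encodes the depth."""
    tz = 2.0 * focal / (img_size * s + 1e-9)
    return np.array([tx, ty, tz], dtype=np.float32)
```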
One can use the correct camera parameters if they are given. From this repository, check this issue: https://github.com/chungyiweng/humannerf/issues/74
For a better implementation of HumanNeRF, check MonoHuman. That repository might help you more.
Thanks a lot for your quick reply! Based on the issue you referred to, it seems that I can directly use EasyMocap on the people_snapshot dataset. If so, maybe I will use it for data processing to keep the format consistent with zju_mocap. Or do you think I can use VIBE for the camera and SMPL estimation?
Best.