
How to input a video

point-1 opened this issue 2 years ago • 10 comments

Hi, I want to input a video path and output a result video. What should I do? Thank you.

point-1 · Sep 14 '23 09:09

Hi, our system needs both video and IMU input to run. You can use the view_aist function in evaluate.py to visualize the results. Before that, you should download the AIST++ videos from the official site.

shaohua-pan · Sep 14 '23 09:09
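For orientation, a hypothetical sketch of that visualization step. The actual signature of view_aist lives in evaluate.py and is not shown in this thread, so everything below is an assumption to verify against the repo:

```python
# Hypothetical call: check evaluate.py for the real signature of view_aist and
# for where the AIST++ videos downloaded from the official site must be placed.
from evaluate import view_aist

view_aist()  # renders the estimated motion over the corresponding AIST++ video
```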

I tried to input offline video and IMU data. Do I need to write a forward_offline function in the sig_mp.py file? Is the input an image or a 2D sequence of joints?

xobeiotozi · Feb 18 '24 09:02

There is no need to rewrite the forward function. Just provide the 2D keypoints detected by MediaPipe, transformed onto the Z=1 plane, together with the orientation and acceleration of the 6 IMUs in the camera coordinate system (see the sketch after this comment). For how to call the forward function, refer to evaluate.py.

shaohua-pan · Feb 23 '24 08:02
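To make that input format concrete, here is a minimal sketch of mapping pixel keypoints onto the Z=1 plane with the camera intrinsics. The matrix K below is a placeholder, not a value from the repo:

```python
import numpy as np

# Placeholder intrinsics; replace with your calibrated K. MediaPipe returns
# normalized (0..1) coordinates, so scale them by image width/height first.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixels_to_z1_plane(uv, K):
    """Map N x 2 pixel keypoints to the Z=1 camera plane via K^-1."""
    uv1 = np.concatenate([uv, np.ones((len(uv), 1))], axis=1)  # homogeneous pixels
    return (np.linalg.inv(K) @ uv1.T).T[:, :2]                 # rays with z = 1

kp_2d = np.array([[330.0, 250.0], [300.0, 400.0]])  # two sample detections
print(pixels_to_z1_plane(kp_2d, K))
```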

The generated 3D model comes out deformed, so I have a few questions.

  1. For camera calibration, should I place the black-and-white checkerboard horizontally or vertically within the camera's view?
  2. For IMU calibration, does the initial posture also need to follow the calibration method in TransPose?
  3. I see that first_tran in evaluate.py is related to the rotation matrices of the 24 joints in the first frame. How can I obtain it?

xobeiotozi · Mar 18 '24 06:03

  1. No. Just take pictures of the checkerboard in different orientations until the calibration demo finishes (a standard OpenCV sketch follows below).
  2. Yes.
  3. During testing we use the ground-truth translation of the first frame.

shaohua-pan · Mar 18 '24 07:03
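For reference, "pictures in different orientations" is the standard checkerboard intrinsic calibration. A minimal OpenCV sketch (this is not RobustCap's own demo script; the pattern size and image folder are assumptions):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the board; adjust to your checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob('calib/*.jpg'):  # board photographed in many orientations
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

err, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print('reprojection error:', err)
print('K =', K, sep='\n')
```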

  1. Don't you need to fix the camera position?
  2. If you want real-time operation, how do you get the ground truth of the initial frame?

xobeiotozi · Mar 18 '24 07:03

  1. I'm not sure what your problem is. Does camera correction mean calibration, i.e., computing R or K? For calibration you need to move the camera (or the board); for capturing human motion you should keep the camera fixed (a solvePnP sketch for R follows below).
  2. evaluate.py is the code that reproduces the results in the paper. If you want to run live, please refer to the live demo code.

shaohua-pan · Mar 18 '24 08:03
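If what you need is the extrinsic rotation R of the fixed camera, a common approach (not confirmed as RobustCap's method) is solvePnP against one board image taken from the final, fixed camera position:

```python
import cv2
import numpy as np

pattern = (9, 6)  # same board layout as in the intrinsic sketch above
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

# Placeholder intrinsics; use the K and dist from your own calibration.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

gray = cv2.cvtColor(cv2.imread('fixed_view.jpg'), cv2.COLOR_BGR2GRAY)
ok, corners = cv2.findChessboardCorners(gray, pattern)
if ok:
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)  # board-to-camera rotation
    print('R =', R, sep='\n')
```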

Hi, I still have some questions. Is the resulting pose data converted to Euler angles relative to the T-pose? If I want to get the real values of an action, do I just set seq='xyz' in rotation_matrix_to_euler_angle(r: torch.Tensor, seq='xyz')?

xobeiotozi · Apr 03 '24 02:04
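For context: SMPL pose parameters are local joint rotations relative to the rest (T) pose, and seq='xyz' only selects the decomposition order of the Euler angles. A scipy-based sketch of what such a conversion looks like (the repo's own rotation_matrix_to_euler_angle may use different conventions):

```python
import torch
from scipy.spatial.transform import Rotation

def rotation_matrix_to_euler_angle(r: torch.Tensor, seq='xyz'):
    """Decompose N x 3 x 3 rotation matrices into Euler angles (radians).

    In scipy, a lowercase seq means extrinsic rotations, uppercase intrinsic.
    """
    return torch.from_numpy(Rotation.from_matrix(r.numpy()).as_euler(seq))

# Identity matrices (T-pose local rotations) decompose to all-zero angles.
r = torch.eye(3).repeat(24, 1, 1)
print(rotation_matrix_to_euler_angle(r))  # 24 x 3 tensor of zeros
```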

Hello, I calibrated following the T-pose method, but the generated posture is crooked and the movements look strange. Is it because I'm missing the jump synchronization?

xobeiotozi · Apr 24 '24 11:04

Make sure the IMU axes are aligned with the human and camera axes. You can also double-check the IMU axes themselves.

shaohua-pan · May 17 '24 08:05
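For completeness, a sketch of the TransPose-style alignment referred to in this thread: one rotation aligns the inertial frame with the global (camera) frame, and a constant per-sensor offset is solved from a single T-pose frame. The variable names are mine, not the repo's:

```python
import numpy as np

def tpose_calibrate(R_is_t0, R_gi):
    """Solve the constant sensor-to-bone offset from one T-pose frame.

    R_is_t0: 3x3 sensor-to-inertial rotation read while the subject holds a T-pose.
    R_gi:    3x3 inertial-to-global (camera) alignment from the first step.
    In T-pose the bone orientation is the identity, so
    I = R_gi @ R_is_t0 @ R_sb  =>  R_sb = (R_gi @ R_is_t0).T
    """
    return (R_gi @ R_is_t0).T  # the inverse of a rotation is its transpose

def bone_orientation(R_is_t, R_gi, R_sb):
    """Bone orientation in the global/camera frame at time t."""
    return R_gi @ R_is_t @ R_sb

# Sanity check: identity readings everywhere keep the bone at identity.
I = np.eye(3)
R_sb = tpose_calibrate(I, I)
assert np.allclose(bone_orientation(I, I, R_sb), I)
```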