
3D visualization

Open lisa676 opened this issue 4 years ago • 7 comments

Hi @CHUNYUWANG, you did great work. I want to confirm: does this repository contain code for 3D visualization, or is it only for validation and testing?

lisa676 avatar May 22 '20 03:05 lisa676

Hi @lan786, we have not merged the 3D visualization parts into this repo.

zhezh avatar Jun 19 '20 03:06 zhezh

@zhezh Hello, are there any plans to incorporate the 3D visualization part into this repo in the future?

Ly12346 avatar Oct 13 '21 08:10 Ly12346

Hi @Ly12346 We won't merge the visualization demo into this repo because it cannot run independently. However, I can briefly describe the implementation. It is developed with Qt (PyQt) and pyqtgraph. We first capture sequence images and the corresponding 3D poses, then visualize the images with QtWidgets.QLabel and the 3D poses with gl.GLLinePlotItem. The tricky part is that you need to be very careful with the 3D coordinate transformations.

zhezh avatar Oct 13 '21 11:10 zhezh
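For anyone attempting a similar visualization, here is a minimal sketch of the two fiddly pieces mentioned above: transforming 3D joints into a frame that pyqtgraph's GL view expects, and packing skeleton bones into the endpoint-pair array that `gl.GLLinePlotItem(mode='lines')` takes. The skeleton edges, calibration convention (`x_cam = R @ x_world + t`), and the y-down/millimeter camera convention are assumptions for illustration, not the repo's actual code; verify signs and units against your own calibration.

```python
import numpy as np

# Hypothetical skeleton edges (parent, child); the real joint layout
# depends on the dataset (e.g. TotalCapture has its own joint ordering).
SKELETON_EDGES = [(0, 1), (1, 2), (2, 3)]

def camera_to_gl(joints_mm, R, t):
    """Map 3D joints from camera coordinates (assumed millimeters, y-down)
    into pyqtgraph's GL frame (z-up, meters).

    joints_mm : (J, 3) joints in camera coordinates
    R, t      : camera rotation (3, 3) and translation (3,) such that
                x_cam = R @ x_world + t (a common convention; check
                yours before trusting the signs)
    """
    # Invert the extrinsics: x_world = R^T @ (x_cam - t), done row-wise.
    world = (joints_mm - t) @ R
    world_m = world / 1000.0            # mm -> m for a sane GL scene scale
    # Reorder axes and flip so the data's "up" (-y) becomes GL's +z.
    gl = world_m[:, [0, 2, 1]].copy()
    gl[:, 2] *= -1.0
    return gl

def edges_to_segments(joints, edges):
    """Pack joints into the (2*E, 3) array that
    gl.GLLinePlotItem(pos=..., mode='lines') expects: consecutive rows
    are the two endpoints of each bone."""
    segs = np.empty((2 * len(edges), 3), dtype=joints.dtype)
    for i, (a, b) in enumerate(edges):
        segs[2 * i] = joints[a]
        segs[2 * i + 1] = joints[b]
    return segs
```

With these in place, the per-frame update loop would set the QLabel's pixmap from the image and call `setData(pos=edges_to_segments(...))` on the line item.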

@zhezh Thank you very much for sharing!

Ly12346 avatar Oct 16 '21 11:10 Ly12346

Hi @zhezh I have another question for you. Regarding the IMU part, I did not find specific information about the IMUs mentioned in the paper. I need these details for my experiments, such as the model of the IMU and where it can be purchased.

Ly12346 avatar Oct 21 '21 03:10 Ly12346

@Ly12346 We use IMU measurements from the TotalCapture dataset, which was recorded with Xsens IMU suits.

zhezh avatar Oct 21 '21 03:10 zhezh

@zhezh thank you very much.

Ly12346 avatar Oct 21 '21 03:10 Ly12346