
Results about real-time

Open lisa676 opened this issue 5 years ago • 18 comments

Hi @garyzhao, thanks so much for this repo, it's really amazing work. I have some questions about it. 1: Which 2D method did you use for the 2D pose estimation? 2: Can we run this project in real time? I mean, can we use a webcam or live camera with this project, and what would the speed (fps) be, like the authors did for VideoPose3D (https://github.com/facebookresearch/VideoPose3D/blob/master/INFERENCE.md)?

lisa676 avatar Dec 10 '19 04:12 lisa676

Hi @lisa676,

Thanks for your interest in our work.

  1. We use a pipeline similar to the one presented in "Simple Baselines for Human Pose Estimation and Tracking" for 2D pose estimation.

Some good alternatives may be: https://github.com/bearpaw/pytorch-pose https://github.com/JimmySuen/integral-human-pose

  2. Yes, I think so, but the speed largely depends on the 2D detector you use. If you have a GPU, e.g., a GTX 1080 Ti, the speed may be around 20 to 50 fps.

Best, Long

garyzhao avatar Dec 13 '19 06:12 garyzhao

Hi @garyzhao, this repo is indeed of good quality and a great help, thank you for it. What steps would I need to take to run inference with SemGCN using this repo? If I understand correctly, you have provided the evaluation procedure on the preprocessed H36M data. All I want is to see the result of SemGCN on a random video file.

Regards, Saichand

saichanda avatar Mar 30 '20 06:03 saichanda

Hi @saichanda,

Thanks for your interest in our work!

To test on a video file, you need to run HG (the stacked hourglass 2D detector) on each frame to get the 2D joint positions, and then feed them one by one into SemGCN. You can reuse most of the code in this repo.
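Roughly, the per-frame loop could look like the sketch below. This is only an illustration of the idea, not this repo's actual API: `detect_2d_joints`, `normalize_2d`, and the model loading are placeholders you would fill in with your own hourglass wrapper and the same normalization used during training.

```python
import numpy as np
import torch

def lift_video_to_3d(model, detect_2d_joints, frames, normalize_2d):
    """Run a 2D detector on every frame and lift each result to 3D with SemGCN.

    All arguments are placeholders for your own components:
      model            -- a trained SemGCN (torch.nn.Module) loaded from a checkpoint
      detect_2d_joints -- callable: frame -> (16, 2) pixel keypoints (e.g. stacked hourglass)
      frames           -- iterable of video frames (e.g. decoded with OpenCV)
      normalize_2d     -- the same 2D normalization that was applied during training
    """
    model.eval()
    predictions_3d = []
    with torch.no_grad():
        for frame in frames:
            joints_2d = normalize_2d(detect_2d_joints(frame))             # (16, 2)
            inp = torch.as_tensor(joints_2d, dtype=torch.float32)[None]   # (1, 16, 2)
            out_3d = model(inp)[0].cpu().numpy()                          # root-relative 3D joints
            predictions_3d.append(out_3d)
    return np.stack(predictions_3d)                                       # (num_frames, J, 3)
```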

Best, Long

garyzhao avatar Apr 04 '20 22:04 garyzhao

Hi @garyzhao, thank you for your kind reply. I understand the steps you have mentioned for handling custom video files. Could you also shed some light on supporting 2D prediction models other than the ones mentioned in the repo? Is there a way to use other 2D prediction models with this repo? Did you experiment with any other 2D prediction models?

Thank you again for your time.

saichanda avatar Apr 09 '20 06:04 saichanda

Hi @saichanda,

You may check https://github.com/facebookresearch/VideoPose3D/blob/master/DATASETS.md

They provide some results of different 2D prediction models, which are compatible with our code. It may be a good starting point for your questions.

Best, Long

garyzhao avatar Apr 12 '20 18:04 garyzhao

Hi @garyzhao, thank you for the support.

Regards, Saichand

saichanda avatar Apr 13 '20 04:04 saichanda

@saichanda Hi, did you try this repository on any random/in-the-wild video or image?

lisa676 avatar Jun 28 '20 09:06 lisa676

I did try this project on random video files, but it requires some preprocessing work:

  1. We got 2D predictions by feeding the input video to one of our hourglass-based detector models.
  2. We preprocessed the 2D keypoints into the required H36M format using prepare_data_2d_h36m_sh.py: https://github.com/garyzhao/SemGCN/blob/master/data/prepare_data_2d_h36m_sh.py
  3. This gives the right input to feed into the SemGCN repo (a rough sketch of the idea is at the end of this comment).

I hope this helps.

With regards, Saichand A V R P
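A minimal sketch of the packaging idea, purely as an illustration; the actual array layout and file format must match whatever prepare_data_2d_h36m_sh.py and the repo's data loaders expect, so treat the key name and shapes below as placeholders:

```python
import numpy as np

# Dummy data standing in for per-frame hourglass outputs: one (16, 2) keypoint array per frame.
keypoints_per_frame = [np.random.rand(16, 2) * [1000, 1000] for _ in range(100)]

# Stack into a single (num_frames, 16, 2) array and save it for the later conversion step.
keypoints = np.stack(keypoints_per_frame).astype(np.float32)
np.savez_compressed('my_video_2d.npz', positions_2d=keypoints)
```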

saichanda avatar Jun 30 '20 12:06 saichanda

Hi, could you tell me which project you used to get the 2D predictions? Is it VideoPose3D?

sulei1998 avatar Jul 12 '20 14:07 sulei1998

Hi @sulei1998, we used the hourglass repo for the 2D predictions.

saichanda avatar Jul 13 '20 03:07 saichanda

Hi @saichanda, did you train your own hourglass model or did you use their pretrained one? I noticed that their pretrained model is pretty inaccurate.

NielsOsnabrugge avatar Sep 15 '20 11:09 NielsOsnabrugge

Hi @NielsOsnabrugge, as this was an experimental study and for want of time, we did not train our own hourglass model for the inference (the goal was just to test the repo and get SemGCN working, so we followed the required configuration; accuracy was not the main focus). We later moved on to better custom 2D predictions to get more accurate SemGCN output.

saichanda avatar Sep 18 '20 02:09 saichanda

@saichanda Thank you for your reply.

I do have one last question. I converted my 2D data from Hourglass to the required H36M format, ran the data through the linear model, and got 3D predictions back. However, when I plot the output coordinates, they look like a complete mess. Do you know if I need to convert the output 3D coordinates first?

An example of the output coordinates is as follows: [image]

Connecting the dots (as described in this issue: https://github.com/garyzhao/SemGCN/issues/7) results in the following interesting 3D pose:

[image]

Another example of a 3D pose estimation: [image]

where the 2D input looks like this: [image]

I am using the ckpt_linear_sh.pth.tar model (Martinez et al. [1], SH (fine-tuned)), where my input is a tensor containing all 16x2 keypoints in the H36M format. The output is 15x3, to which I add the hip (0,0,0) coordinate afterwards.

NielsOsnabrugge avatar Sep 21 '20 10:09 NielsOsnabrugge

Also tagging @garyzhao in case he knows the answer.

NielsOsnabrugge avatar Sep 21 '20 12:09 NielsOsnabrugge

The hourglass model doesn't give us a spine point, and without the spine point we don't get correct output from SemGCN. In your 2D input image, I don't see the spine point. Since hourglass doesn't provide the spine, calculate it from the hip and thorax and map it into the H36M format (see the sketch below); it should work.
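A minimal sketch of that idea, assuming the spine is approximated as the midpoint between the hip (pelvis) and thorax; this is an approximation rather than the exact H36M definition, and the joint indices depend on your skeleton layout:

```python
import numpy as np

def add_spine(joints_2d, hip_idx, thorax_idx):
    """Append an approximate spine keypoint as the midpoint of hip and thorax.

    joints_2d: (N, 2) array of 2D keypoints; hip_idx/thorax_idx depend on your layout.
    In practice the spine point should be inserted at the correct position of the
    H36M joint ordering, not simply appended at the end.
    """
    spine = (joints_2d[hip_idx] + joints_2d[thorax_idx]) / 2.0
    return np.vstack([joints_2d, spine])

# Example with dummy keypoints (15 joints without a spine); indices are illustrative.
joints = np.random.rand(15, 2)
joints_with_spine = add_spine(joints, hip_idx=0, thorax_idx=7)
```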

saichanda avatar Sep 23 '20 03:09 saichanda

Hi @saichanda, thanks for your reply.

I have since tested it with a spine point, but the results are still bad. I don't need to convert the 3D output, right? And the input is just the 16 points in H36M format, correct? No need to normalize them or anything?

I'm unsure where my mistake is: I have all 16 keypoints, run them through the model, and get 3D keypoints out, but when I display them they look nothing like the 2D pose.

NielsOsnabrugge avatar Sep 23 '20 06:09 NielsOsnabrugge

It seems I had to normalize my 2D input. This was somewhat unclear at first; now that I have normalized the input, I am getting good results.
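For anyone hitting the same problem: the 2D inputs need to be normalized the same way as during training. Below is a minimal sketch of the screen-coordinate normalization used in VideoPose3D-style pipelines (and, if I'm not mistaken, by the helper in this repo's common/camera.py); double-check against the repo's own code before relying on it.

```python
import numpy as np

def normalize_screen_coordinates(X, w, h):
    """Map pixel coordinates so that [0, w] becomes [-1, 1], preserving aspect ratio.

    X: (..., 2) array of 2D keypoints in pixels; w, h: image width and height.
    """
    assert X.shape[-1] == 2
    return X / w * 2 - np.array([1, h / w])

# Example: 16 keypoints detected on a 1000x1002 frame.
joints_2d = np.random.rand(16, 2) * [1000, 1002]
joints_2d_normalized = normalize_screen_coordinates(joints_2d, w=1000, h=1002)
```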

NielsOsnabrugge avatar Sep 25 '20 08:09 NielsOsnabrugge

Hi @NielsOsnabrugge, how do you calculate the spine point from the hip and thorax when the hip coordinate is (0,0)? Thanks in advance.

dandingol03 avatar Dec 06 '20 08:12 dandingol03