skeleton-based-action-recognition
How can it be used for real-time recognition?
I have some queries:
- How do you load your model for inference in real time?
- Can we use this with my own custom dataset?
Hi,
Thanks for your interest in my work. The code is integrated into the OpenDR toolkit, and a demo is available here: https://github.com/opendr-eu/opendr/blob/master/projects/perception/skeleton_based_action_recognition/demos/demo.py
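In case it helps, inference essentially amounts to loading the trained weights and feeding a skeleton clip tensor to the network. Below is a minimal sketch of that flow; the recognizer class and checkpoint path are stand-ins, so please refer to the linked demo for the exact OpenDR API:

```python
import torch
import torch.nn as nn

# Stand-in recognizer: a real setup would build the ST-GCN network and load
# its trained weights; the class and checkpoint names here are hypothetical.
class TinyRecognizer(nn.Module):
    def __init__(self, num_classes=400):
        super().__init__()
        self.head = nn.Linear(3 * 300 * 18 * 2, num_classes)

    def forward(self, x):  # x: (batch, channels, frames, joints, persons)
        return self.head(x.flatten(1))

model = TinyRecognizer()
# model.load_state_dict(torch.load("stgcn_checkpoint.pt", map_location="cpu"))  # hypothetical path
model.eval()

# Skeleton clip in the usual ST-GCN layout: (N, C, T, V, M) = (1, 3, 300, 18, 2)
clip = torch.randn(1, 3, 300, 18, 2)

with torch.no_grad():
    logits = model(clip)
print("predicted class id:", logits.argmax(dim=1).item())
```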
To train the models on a custom skeleton dataset, you first need to use the lightweight OpenPose method to extract body poses from the videos. There is code for that here: https://github.com/opendr-eu/opendr/blob/master/projects/perception/skeleton_based_action_recognition/demos/skeleton_extraction.py
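The extracted poses then have to be packed into the (C, T, V, M) tensor layout that these models consume. Here is a rough sketch, assuming each frame yields a (num_persons, num_joints, 3) array of (x, y, confidence) values; the shapes below are assumptions matching the Kinetics/OpenPose setup:

```python
import numpy as np

C, T, V, M = 3, 300, 18, 2  # channels, frames, joints, max persons

def poses_to_clip(frames):
    """Pack per-frame pose arrays into the (C, T, V, M) tensor."""
    clip = np.zeros((C, T, V, M), dtype=np.float32)
    for t, frame in enumerate(frames[:T]):       # pad/truncate to T frames
        for m, person in enumerate(frame[:M]):   # keep at most M persons
            clip[:, t, :, m] = person.T          # (V, 3) -> (3, V)
    return clip

# Dummy usage: 30 frames with one detected person each
frames = [np.random.rand(1, V, 3) for _ in range(30)]
print(poses_to_clip(frames).shape)  # (3, 300, 18, 2)
```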
Thanks for your reply. I applied the work to my custom dataset and ran real-time recognition. However, the FPS is very low, around 2 FPS, while in your demo it looks quite good, around 15-18 FPS. I have a couple of questions and hope you can clear up my queries.
- Can you provide information about the testing environment, such as device specifications?
- Also, GCN-based papers on action recognition usually don't discuss real-time recognition, which, in my view, is very important for any vision-based research or product. How can we judge those models' significance in a real-time scenario? (I have added a rough timing sketch at the end of this message.)
- Another question: do we need to create the graph according to the dataset or to the pose estimation technique? Kinetics-400 uses OpenPose, and its graph is structured accordingly. However, if I want to use another pose estimation tool, such as PoseNet from TensorFlow or BlazePose from MediaPipe, do I need to create my own graph? The sketch right after this list shows what I mean.
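To illustrate the last question, this is roughly how a custom graph could be defined for a different keypoint layout. The edge list is only a subset of MediaPipe BlazePose's 33-keypoint topology (indices per the MediaPipe docs), and the normalization is just one common choice:

```python
import numpy as np

NUM_JOINTS = 33  # BlazePose keypoint count
edges = [
    (11, 12),             # shoulders
    (11, 13), (13, 15),   # left arm
    (12, 14), (14, 16),   # right arm
    (11, 23), (12, 24),   # torso
    (23, 24),             # hips
    (23, 25), (25, 27),   # left leg
    (24, 26), (26, 28),   # right leg
]

# Symmetric adjacency with self-loops, in the spirit of ST-GCN graphs
A = np.eye(NUM_JOINTS, dtype=np.float32)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

A = A / A.sum(axis=1, keepdims=True)  # simple row normalization
```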
Thank you for your time. Cheers!!!
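PS: in case it helps with the FPS question above, this is roughly how I time inference; the model here is just a runnable stand-in, so swap in the actual loaded recognizer:

```python
import time
import torch
import torch.nn as nn

# Stand-in for the recognizer; replace with the real loaded model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 300 * 18 * 2, 400))
model.eval()

clip = torch.randn(1, 3, 300, 18, 2)

with torch.no_grad():
    for _ in range(10):   # warm-up iterations
        model(clip)
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        model(clip)       # on GPU, call torch.cuda.synchronize() before timing
    elapsed = time.perf_counter() - start

print(f"avg latency: {1000 * elapsed / n:.1f} ms -> {n / elapsed:.1f} FPS")
```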