Is it possible to publish the training and evaluation details?
Hi, great job! I read the paper. Since it can serve as a baseline, I'd like to know whether it's possible to publish your training and evaluation details. Thanks for your contribution!
Hi, the training and evaluation code are coming soon! I'm working on making them ready for release.
Best, Jason
Hello, I am also trying to run inference on my own video dataset. Do you have an estimate for when the code for running human dynamics on our own videos will be released?
If the release isn't expected for a while, could you list the preprocessing steps needed to make video data compatible with the existing demo.py? In particular, how do I generate the labels [x, y, visibility]?
Hi,
In order to run on your own videos, you need a video tracklet cropped around a person. The easiest way to do this is to compute bounding boxes from 2D keypoints, hence [x, y, visibility], where (x, y) are pixel coordinates and visibility is a visibility indicator or confidence value. If you have ground truth 2D keypoints, you should be able to plug them in directly (don't worry about the keypoint order, since they're only used to get a bounding box and are not fed into the model).
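In case it helps, here is a minimal sketch of the bounding-box step described above: take the keypoints above a visibility/confidence threshold, and fit a padded square box around them. The function name `bbox_from_kps` and the threshold and margin values are my own assumptions for illustration, not the repo's actual interface.

```python
import numpy as np

def bbox_from_kps(kps, vis_thresh=0.1, margin=1.2):
    """Compute a square bounding box from 2D keypoints.

    kps: (N, 3) array of [x, y, visibility] rows, where visibility is a
         binary indicator or a confidence score.
    Returns (center, size): box center in pixels and square side length.
    """
    visible = kps[:, 2] > vis_thresh
    if not visible.any():
        raise ValueError("No visible keypoints; cannot compute a box.")
    pts = kps[visible, :2]
    min_pt = pts.min(axis=0)
    max_pt = pts.max(axis=0)
    center = (min_pt + max_pt) / 2.0
    # Square box with some margin around the tight extent of the person.
    size = (max_pt - min_pt).max() * margin
    return center, size
```

Note that the keypoint order doesn't matter here, which is why any reasonable 2D pose format should work as input.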
If you don't have ground truth keypoints or don't want to write that interface yourself, I've updated the demo so that it works with 2D tracks predicted by PoseFlow. The README has instructions on building the dependencies.
Best, Jason
Hi, thanks for your great work. I'm wondering when we might see the training code, since I'm really interested in this work and would like to implement it on my own.