
Evaluation with past research

wooheum-xin opened this issue 2 years ago • 3 comments

Hi dear authors, I would like to start by saying thank you for your amazing work. Did you re-implement past research (Lin et al. / JL2P / Ghosh et al.)? How can I evaluate them with your code?

wooheum-xin · May 11 '22 12:05

Hello @L190201301,

I will put more information in the README in the next few weeks. (I may also distribute their motions.)

To tell you what I use for comparison with previous works (while waiting for me to update the README):

  • For Lin et al.: as the code was not released, I use the reimplementation and pretrained model of Language2Pose (JL2P)
  • JL2P: code and pretrained model: https://github.com/chahuja/language2pose
  • Ghosh et al.: code and pretrained model: https://github.com/anindita127/Complextext2animation (this code is heavily based on JL2P)

To get the motions as npy files, I follow each README.md to do the installation, then I do:

  • python sample_wordConditioned.py => to get motion samples (still as rifke features)
  • To get proper xyz joints, I "hack" their render.py script so that it saves motions as npy files instead of rendering them. (I do it this way to make sure the rifke => xyz conversion is done correctly; see the sketch just after this list.) I will explain it more clearly in the README.
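To give an idea of what I mean, here is an illustrative sketch only, not their actual render.py (the function name and signature are assumptions): the change amounts to saving the converted xyz joints to disk instead of handing them to the renderer.

```python
import numpy as np

# Hypothetical sketch: where their render.py would pass the xyz joints
# to the renderer, save them to disk instead.
def render(xyz_joints, out_path):
    # xyz_joints: array of shape (num_frames, num_joints, 3), already
    # converted from rifke features by their own pipeline, so the
    # rifke => xyz conversion matches theirs exactly.
    np.save(out_path, xyz_joints)  # replaces the original rendering call
```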

I will also update the eval.py script, and upload the script to create a table with all the results.

Mathux · May 11 '22 13:05

Hello, I also have a question with evaluation.

As far as I'm concerned, previous studies have not reported results for variable sequence lengths.

Were all the results presented in the paper obtained using a fixed length?

dwro0121 · May 12 '22 12:05

Hello,

That's a very good question. Actually, what I am doing is not ideal; we can discuss it if you think of something better.

After generating motions (from any method), for each sequence in the test set I load the GT motion and the generated one. Then I take the maximum number of frames they have in common (the minimum of the two lengths) and compute the metrics (APE on the root joint, etc.) on those frames. (The metrics are averaged over the test set, so for some sequences they are computed over fewer frames; see the sketch below.)
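A minimal sketch of this protocol (the names evaluate and metric_fn, and the assumption that motions are numpy arrays of shape (num_frames, num_joints, 3), are illustrative; the exact code is in eval.py, linked below):

```python
import numpy as np

def evaluate(gt_motions, gen_motions, metric_fn):
    # gt_motions / gen_motions: lists of arrays, one per test sequence,
    # each of shape (num_frames, num_joints, 3)
    scores = []
    for gt, gen in zip(gt_motions, gen_motions):
        # Compute the metric only on the frames both sequences share.
        length = min(len(gt), len(gen))
        scores.append(metric_fn(gt[:length], gen[:length]))
    # Average over the test set; shorter sequences contribute metrics
    # computed over fewer frames.
    return float(np.mean(scores))
```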

TEMOS always generates motions of the appropriate length, as the length is one of the inputs to the model (all poses are generated in one pass). Previous works are generally auto-regressive and are trained to generate a fixed number of poses at a time (which requires several passes through the model). When I evaluate, I take what they generate.

If you are interested in the code, you can check it out:

  • Compute the min length: https://github.com/Mathux/TEMOS/blob/ea12cf6b22122aa5be95bbd75fcc374c0f42398a/eval.py#L177
  • Compute the metrics: https://github.com/Mathux/TEMOS/blob/ea12cf6b22122aa5be95bbd75fcc374c0f42398a/temos/model/metrics/compute.py#L82
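For instance, an APE-style metric on the root joint can be computed roughly like this (a sketch assuming the root joint sits at index 0; the real implementation linked above differs):

```python
import numpy as np

def ape_root(gt, gen):
    # gt, gen: (num_frames, num_joints, 3); root joint assumed at index 0.
    # Mean Euclidean distance between the two root trajectories.
    return np.linalg.norm(gt[:, 0] - gen[:, 0], axis=-1).mean()
```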

Mathux · May 12 '22 13:05

Hi,

I updated the README.md. You can use the command line bash prepare/download_previous_works.sh to download the motions generated by previous works, then python evaluate.py folder=previous_work/ghosh to evaluate on Ghosh et al.

Mathux · Nov 08 '22 07:11