About PFC implementation and eval result
Thanks for the amazing work, authors. I am trying to reproduce the result reported in the paper and have two questions.
- What does the constant `10000` mean in PFC's implementation? I can't map it to formulation (10) in the paper: https://github.com/Stanford-TML/EDGE/blob/17c3428669ed6733edd9d8c66f7dc62060b8e46d/eval/eval_pfc.py#L50 (see the sketch after this list).
- I find it hard to reproduce the PFC result of 1.5363 reported in the paper. Do you compute PFC only on the AIST++ test split? And what does "Generate ~1k samples" in the README mean? The test split contains only 20 pieces, which become 186 slices after slicing.
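To make the first question concrete, here is a minimal sketch of the kind of PFC-style score I had in mind when reading formulation (10): a per-frame product of the root acceleration magnitude and both foot speeds, averaged and normalized by the peak root acceleration. The function name and the `scale` argument (standing in for the 10000 I'm asking about) are hypothetical; this is not the repo's actual eval_pfc.py code.

```python
import numpy as np

def pfc_like_score(root_accel, left_foot_vel, right_foot_vel, scale=10000.0):
    """Illustrative PFC-style score (not the repo's implementation).

    root_accel, left_foot_vel, right_foot_vel: arrays of shape (N, 3).
    `scale` is a hypothetical readability factor standing in for the
    constant in question; it has no counterpart in formulation (10) as I read it.
    """
    a = np.linalg.norm(root_accel, axis=-1)       # per-frame root acceleration magnitude
    vl = np.linalg.norm(left_foot_vel, axis=-1)   # per-frame left-foot speed
    vr = np.linalg.norm(right_foot_vel, axis=-1)  # per-frame right-foot speed
    per_frame = a * vl * vr                       # large only when both feet move while the root accelerates
    return scale * per_frame.mean() / (a.max() + 1e-8)
```

If the 10000 is just a unit-scaling factor like `scale` here, that would explain why it doesn't appear in the formula, but I'd appreciate confirmation from the authors.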
Has anyone successfully reproduced the results? 🤔
Has anyone figured it out? We still need an explanation of what "Generate ~1k samples" in the README means.
@MingCongSu, hi, I also face this problem. I ran test.py with two settings for data/test: using the cached Jukebox features, and using the music wavs directly. I got two different PFC values: 1.5922 for one and 0.9621 for the other.
I assume the generated results are not deterministic, and that this causes the varying metrics? One way to check is sketched below.
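To rule sampling randomness in or out, one could pin every RNG before generation and rerun test.py twice with the same setting. A minimal sketch, assuming the helper is called before generation; it is my own, not part of the repo:

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 0):
    """Pin all RNGs so repeated sampling runs are comparable.
    Diffusion sampling draws Gaussian noise at every step, so unseeded
    runs produce different dances and hence slightly different PFC."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

If two seeded runs still disagree, the spread would have to come from the feature pipeline (cached Jukebox features vs. raw wavs) rather than from sampling noise.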
I don't think so. The PFC of the ground truth (1.332) can be reproduced from "../test/motions_sliced", but when "../test/wavs_sliced" is used as input (with the checkpoint downloaded from GitHub), the PFC comes out to ~1.29, which is really confusing.