Evaluation Comparison to OmniControl
Thank you for making your amazing work open source.
I have a question about the evaluation results in Table 1. Is the evaluation in Table 1 the same as the one reported by OmniControl? The numbers for OmniControl itself match, but the results for GMD and PriorMDM differ. Could you please explain these discrepancies?
OmniControl: [Table 1]
InterControl: [Table 1]
We ran PriorMDM's and GMD's code ourselves. Since our paper is concurrent work with OmniControl (Nov 2023 vs. Oct 2023, by first arXiv version date), we did not have access to their code when we did this work, so we copied their reported results directly into our table in the updated arXiv version. Also, if you check the GitHub release dates, our code was released earlier than OmniControl's.
Thank you for the clarification.
Hi, and thank you again for open-sourcing your amazing work.
I am having a hard time replicating the evaluation in Table 1.
- Can you provide the command for running the evaluation that achieves the FID score of 0.159 reported in Table 1?
- Specifically, I’m unsure whether the evaluation was conducted using only 5 key frames as the conditioning, as mentioned in your response to Issue #3, or using all frames with the `--mask_ratio 1` setting shown in the README (rough sketch of the two variants below).
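For concreteness, this is roughly what I am trying to compare. The script name and `--model_path` argument below are placeholders on my part, not taken from the repo; `--mask_ratio 1` is the only flag I copied from the README.

```bash
# Hypothetical sketch of the two evaluation variants I am asking about.
# Script name and --model_path are placeholders, not from the repo.

# Variant A: condition on all frames, as the README suggests
python eval_intercontrol.py --model_path ./save/model.pt --mask_ratio 1

# Variant B: condition on only 5 key frames, as described in Issue #3
# (I do not know which flag, if any, selects the 5-key-frame setting)
```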