
RepNet QUVA-Repetition Benchmarks


Hey @debidatta and the RepNet team, hope you are all doing well. First of all, thanks for this super cool model and the detailed Colab notebook :fire:. I've been exploring the model using both the official Colab notebook and this CLI implementation, which translates the Colab code into a Python command-line interface for convenient experimentation. I'm specifically interested in evaluating the model on the QUVA Repetition dataset, as was done in the paper. However, when I benchmark the model with the default parameters, I get the following results:

MAE_ERROR: 0.31
OBO_ERROR: 0.55

In contrast, the original paper reports the following metrics for the QUVA Repetition dataset:

[Screenshot: metrics table from the paper for the QUVA Repetition dataset]

Notably, there is a substantial disparity in the OBO_ERROR, with 55% of the examples mispredicted in my experiments. I am curious whether any manual tuning of the inference parameters (e.g., strides, periodicity_threshold) was performed for each example in the original paper.
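To make the question concrete, the kind of per-video selection I have in mind looks roughly like this (a hypothetical sketch: `run_repnet` is a stand-in for the actual inference call, and the parameter grids are guesses on my part, not values from the paper):

```python
from itertools import product

def run_repnet(frames, stride, periodicity_threshold):
    """Placeholder for the actual RepNet inference call; assumed to
    return (repetition_count, mean_periodicity_confidence)."""
    raise NotImplementedError

# Guessed grids -- not the values used in the paper.
STRIDES = [1, 2, 3, 4, 8]
THRESHOLDS = [0.2, 0.5]

def tune_and_count(frames):
    """Select the (stride, threshold) pair that maximizes the model's
    own confidence for this video; no ground truth is consulted."""
    best_conf, best_count = -1.0, None
    for stride, thr in product(STRIDES, THRESHOLDS):
        count, conf = run_repnet(frames, stride, thr)
        if conf > best_conf:
            best_conf, best_count = conf, count
    return best_count
```

Is a selection step like this (or something else entirely) what produced the paper's numbers?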

If possible, could you provide insights into the tuning process? Here's my benchmarking script in case you're interested in taking a look.
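For reference, the metric computation in that script boils down to something like this (a minimal self-contained sketch following the definitions in the paper; the example counts at the end are made up):

```python
import numpy as np

def evaluate_counts(pred_counts, gt_counts):
    """MAE: mean normalized absolute count error, |pred - gt| / gt.
    OBO error: fraction of videos whose predicted count is off by
    more than one repetition."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = float(np.mean(np.abs(pred - gt) / gt))
    obo_error = float(np.mean(np.abs(pred - gt) > 1))
    return mae, obo_error

# Made-up counts for three videos, just to show the usage:
mae, obo = evaluate_counts([10, 5, 8], [10, 7, 8])
print(f"MAE_ERROR: {mae:.2f}, OBO_ERROR: {obo:.2f}")
```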

Looking forward to your reply.

Thanks,
Anwaar
