
Evaluation config

Open chavincc opened this issue 1 year ago • 0 comments

Hello.

I am trying to run an evaluation with the eval.py file for benchmarking. The problem is that I am not sure how to configure the metrics to get an evaluation score. Every pretrained and example config seems to have "metrics": [], and eval.py logs nothing when run.
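
For illustration, this is the kind of change I was expecting to make in the config (the metric name below is just a made-up placeholder, since I could not find any documented options in the repo):

```json
{
  "metrics": ["example_metric"]
}
```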

How do I configure the metrics, and what metric options are available?

Thanks, I appreciate any help.

chavincc · Oct 02 '23 09:10