Evaluation config
Hello,

I am trying to run an evaluation, i.e. the eval.py script, for benchmarking. The problem is that I am not sure how to configure the metrics so that I get an evaluation score. Every pretrained and example config seems to have "metrics": [], and eval.py logs nothing when run.
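For reference, here is the relevant fragment from the example configs, along with my guess at what a filled-in version might need to look like; the metric name "accuracy" below is just a placeholder I made up, not something I found in the docs.

What every config I have seen contains:

```json
{
  "metrics": []
}
```

What I assume it should look like, with a real metric name in place of the placeholder:

```json
{
  "metrics": ["accuracy"]
}
```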
How do I configure the metrics, and what metric options are available?

Thanks, I appreciate any help.