helen ngo
@lvwerra The bit which actually does the validation is `next(x)`, which gets the first element from the loaded YAML (fine because for our metric cards the YAML is always at...
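The lazy-validation pattern described there can be sketched roughly like this (a hypothetical helper, not the actual `evaluate` code; it assumes PyYAML's `safe_load_all`, which returns a lazy generator over documents):

```python
import yaml

def first_yaml_document(text):
    """Return the first YAML document in `text`.

    Hypothetical sketch: `yaml.safe_load_all` is lazy, so parsing (and
    therefore validation) only happens when `next()` pulls the first
    document -- a malformed block raises a YAMLError at that point.
    """
    return next(yaml.safe_load_all(text))
```

Calling `next()` on the generator is what actually triggers the parse, which is why the validation appears to live in that single call.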
Hi @kadirnar, thanks for contributing to `evaluate`! Since this is a community metric, do you mind reflecting these changes on the bary_score repo directly [here](https://huggingface.co/spaces/lvwerra/bary_score/tree/main)? It would be great to...
Thanks @BramVanroy, this is cool! Will add to my list to review mid-next week :)
Thanks @lvwerra, have added all those suggestions! If the API looks good to you now I'll add some tests assuming this format :)
Thanks for the pointer re: `train_eval_index`! Yes, we can definitely use the existing `train_eval_index` by default if nothing is specified and overwrite with the JSON config if needed. Will incorporate.
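The default-plus-override behaviour described above could look something like this (a minimal sketch with a hypothetical helper name; the real structure of `train_eval_index` and the JSON config may differ):

```python
def resolve_eval_config(train_eval_index, json_config=None):
    """Merge evaluation settings, preferring the explicit JSON config.

    Hypothetical sketch: start from the dataset's existing
    `train_eval_index` metadata and let a user-supplied JSON config
    override individual keys when provided.
    """
    merged = dict(train_eval_index)
    merged.update(json_config or {})
    return merged
```

With no JSON config passed, the existing metadata is used unchanged; any keys present in the config win over the defaults.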
OK, I had a look at how `train_eval_index` is formulated and I'm having a bit of trouble reconciling how to get all the features we need out of it. After...
okok will do! That makes sense re: markdown and config, thanks @lhoestq :) In that case I'll keep the EvaluationSuite config. I appreciate you bringing up the `train_eval_index` though because...
Thanks @lvwerra, this is super helpful for understanding your proposal! `model_or_pipeline` was mirroring the arg which gets passed to `evaluator.compute()`, but we can move that out of the subtask and...
Superseded by #337!
~~Good call. It seems like the Hub repo pings the `/api/validate-yaml` endpoint, which is a Gitaly hook (?)~~