HALOs
Code for Evaluating Generative Benchmarks
Thanks for sharing this awesome repo!
The paper reports results on MMLU, GSM8K, HumanEval, and BigBench-Hard, but this repo does not currently seem to contain the code for evaluating on these benchmarks. Could you also share that code? It would be great to follow the exact same evaluation steps when comparing with other alignment methods.
Thanks for your interest. The AlpacaEval results follow the standard instructions in the original repo: https://github.com/tatsu-lab/alpaca_eval For the other, non-LLM-judged results, you can refer to this repo, which downloads the benchmark data and provides run scripts for the different benchmarks: https://github.com/bigcode-project/bigcode-evaluation-harness
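For reference, here is a minimal sketch of how those two harnesses are typically invoked from the command line, based on their READMEs. The model name, output file path, and generation hyperparameters below are placeholders, and exact flag names may vary between versions of the tools, so please double-check against each repo's documentation.

```bash
# AlpacaEval: score a file of model generations with the default GPT-4 annotator
# (assumes `pip install alpaca-eval` and an OPENAI_API_KEY in the environment;
#  outputs.json is a placeholder path to your model's generations)
alpaca_eval --model_outputs outputs.json --annotators_config alpaca_eval_gpt4

# bigcode-evaluation-harness: generate and execute HumanEval completions
# (run from a clone of the harness repo; the model name and sampling
#  hyperparameters here are illustrative, not the paper's exact settings)
accelerate launch main.py \
  --model your-org/your-aligned-model \
  --tasks humaneval \
  --max_length_generation 512 \
  --temperature 0.2 \
  --n_samples 20 \
  --batch_size 10 \
  --allow_code_execution
```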