
Add common benchmarks

Open · steventkrawczyk opened this issue 1 year ago • 9 comments

🚀 The feature

We need to add benchmark test sets so folks can run them against models / embeddings / systems.

A few essentials:

  • BEIR for information retrieval
  • MTEB for embeddings
  • Some metrics from HELM (e.g. ROUGE, BLEU) for LLMs

Motivation, pitch

Users have told us that they want to run academic benchmarks as "smoke tests" on new models.

Alternatives

No response

Additional context

No response

steventkrawczyk avatar Aug 01 '23 22:08 steventkrawczyk

Can I work on this?

LuvvAggarwal avatar Aug 04 '23 18:08 LuvvAggarwal

@LuvvAggarwal Sure thing. The scope of this one is a bit large because we currently don't have any common benchmarks. I think a simple first version would be the following:

  • Add a new benchmarks directory to prompttools
  • Add a Python file to read in a test dataset from a given filepath (probably in CSV format)
  • Add a utility function to compute the relevant metric from the responses
  • Add a dataset to use for the benchmark to a new directory, e.g. prompttools/data
  • Add an example notebook that runs the benchmark and computes the metric

Some benchmarks to start with would be HellaSwag and TruthfulQA, or perhaps simpler metrics like ROUGE and BLEU.

Feel free to deviate from this plan; it's just a suggestion for how to get started. A rough sketch of the CSV-loading and metric steps is below.
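
A minimal sketch of what those two steps might look like. All module, function, and column names here are hypothetical placeholders, not existing prompttools APIs:

```python
# Hypothetical sketch for a prompttools/benchmarks module; the names and the
# CSV layout ("prompt" and "expected" columns) are assumptions, not existing APIs.
import csv
from typing import Callable, List, Tuple


def load_benchmark_csv(filepath: str) -> List[Tuple[str, str]]:
    """Read (prompt, expected_answer) pairs from a CSV file."""
    with open(filepath, newline="") as f:
        reader = csv.DictReader(f)
        return [(row["prompt"], row["expected"]) for row in reader]


def run_benchmark(
    filepath: str,
    complete: Callable[[str], str],      # any model call: OpenAI, Anthropic, local, ...
    score: Callable[[str, str], float],  # metric over (response, expected)
) -> float:
    """Run the model on every prompt and return the mean metric value."""
    pairs = load_benchmark_csv(filepath)
    scores = [score(complete(prompt), expected) for prompt, expected in pairs]
    return sum(scores) / len(scores) if scores else 0.0
```

The example notebook would then just wire a model wrapper and a metric into `run_benchmark`.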

steventkrawczyk avatar Aug 04 '23 18:08 steventkrawczyk

Thanks @steventkrawczyk for the guidance. Based on my initial research, I found a package called "Evaluate" that provides methods for evaluating models (link: https://huggingface.co/docs/evaluate/index). I was thinking of using it.

Please feel free to suggest better approaches, as I am new to ML but would love to contribute.
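
For context, computing a metric with Evaluate looks roughly like this (standard Hugging Face Evaluate usage, not prompttools code):

```python
# pip install evaluate
import evaluate

bleu = evaluate.load("bleu")
result = bleu.compute(
    predictions=["the cat sat on the mat"],
    references=[["the cat is sitting on the mat"]],
)
print(result["bleu"])  # BLEU score between 0 and 1
```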

LuvvAggarwal avatar Aug 05 '23 08:08 LuvvAggarwal

@steventkrawczyk, can we use the "Datasets" library to load benchmark datasets instead of creating a separate data directory? Link to the library: https://github.com/huggingface/datasets

It could also be used for quick tests on prebuilt datasets.
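
For example, pulling HellaSwag through Datasets would look roughly like this (assuming the public "hellaswag" dataset on the Hub):

```python
# pip install datasets
from datasets import load_dataset

# Each HellaSwag row has a context ("ctx"), four candidate
# endings ("endings"), and the gold label ("label").
hellaswag = load_dataset("hellaswag", split="validation")
example = hellaswag[0]
print(example["ctx"])
print(example["endings"])
print(example["label"])
```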

LuvvAggarwal avatar Aug 06 '23 08:08 LuvvAggarwal

@LuvvAggarwal Using datasets sounds like a good start. As for evaluate, we want to write our own eval methods that support more than just Hugging Face (e.g. OpenAI, Anthropic).

steventkrawczyk avatar Aug 06 '23 19:08 steventkrawczyk

@steventkrawczyk Sure, but I don't have much experience with eval methods; it would be great if you could share some references so I can start coding. Thanks

LuvvAggarwal avatar Aug 07 '23 07:08 LuvvAggarwal

For example, if you are using the hellaswag dataset, we need to compute the accuracy of predictions, e.g. https://github.com/openai/evals/blob/main/evals/metrics.py#L12
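
The linked function boils down to the fraction of examples the model got right. A small, hedged approximation of that idea for HellaSwag-style multiple choice (not a copy of the openai/evals code):

```python
from typing import Sequence


def multiple_choice_accuracy(predicted: Sequence[int], gold: Sequence[int]) -> float:
    """Fraction of examples where the predicted ending index matches the gold label,
    in the spirit of get_accuracy in openai/evals."""
    matches = [p == g for p, g in zip(predicted, gold)]
    return sum(matches) / len(matches) if matches else 0.0


# e.g. multiple_choice_accuracy([0, 2, 3], [0, 1, 3]) == 2 / 3
```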

steventkrawczyk avatar Aug 07 '23 14:08 steventkrawczyk

@LuvvAggarwal I kick-started the code for benchmarks here, if you would like to branch off of it: https://github.com/hegelai/prompttools/pull/72

HashemAlsaket avatar Aug 12 '23 17:08 HashemAlsaket

Thanks @HashemAlsaket, I will branch it

LuvvAggarwal avatar Aug 14 '23 05:08 LuvvAggarwal