Add Prompt Retrieval
We could add the prompt retrieval benchmark: https://arxiv.org/abs/2209.01975
You mean add it as a task, right? Along with all the datasets mentioned in the paper?
As a benchmark I suspect, including all its datasets.
Also cc'ing @hongjin-su here, who knows mteb quite well and may be interested in adding / helping add this.
Sure, I could add this!
The performance for prompt retrieval is measured by the LLM's results on downstream tasks. Back then, the paper used GPT-J. Should we switch to a more up-to-date model, e.g., Llama3-8B or Mistral-7B?
If those newer models give a more reliable evaluation, then it's probably a good idea to switch!
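For reference, a minimal sketch of how this kind of evaluation could be scored: an embedding model retrieves the k most similar demonstrations for each test example, and the downstream LLM's accuracy with those demonstrations in the prompt is the task score. This is not MTEB's actual task API; the `llm_generate` wrapper and model names are illustrative assumptions.

```python
# Sketch only: score prompt retrieval by retrieving k demonstrations with an
# embedding model and measuring downstream LLM accuracy on the test set.
# The `llm_generate` callable (e.g., a Llama3-8B / Mistral-7B wrapper) and the
# data format are assumptions, not MTEB's real interface.
from sentence_transformers import SentenceTransformer
import numpy as np

def retrieve_demonstrations(embedder, query, pool, k=8):
    """Return the k pool examples most similar to the query (cosine similarity)."""
    query_emb = embedder.encode([query], normalize_embeddings=True)
    pool_emb = embedder.encode(pool, normalize_embeddings=True)
    scores = (pool_emb @ query_emb.T).squeeze(-1)
    top_k = np.argsort(-scores)[:k]
    return [pool[i] for i in top_k]

def evaluate_prompt_retrieval(embedder, llm_generate, test_set, train_pool, k=8):
    """Accuracy of the downstream LLM when prompted with retrieved demonstrations."""
    correct = 0
    for example in test_set:  # each example assumed to be {"input": str, "label": str}
        demos = retrieve_demonstrations(embedder, example["input"], train_pool, k)
        prompt = "\n\n".join(demos + [example["input"]])
        prediction = llm_generate(prompt)
        correct += int(prediction.strip() == example["label"])
    return correct / len(test_set)

# Usage (illustrative):
# embedder = SentenceTransformer("all-MiniLM-L6-v2")
# score = evaluate_prompt_retrieval(embedder, my_llm_generate, test_set, train_pool)
```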
I created a PR to include 10 tasks for prompt retrieval. Feel free to check it out!
Just to let other contributors know: the PR (#608) was never merged. We would still welcome this submission.