Finalizing MMTEB
This issue is to get an overview of what needs to be done before MMTEB can be finalized.
- Adding the last remaining datasets, notably:
- [x] #641 #830
- [x] #718
- [x] #642 #833
- Speeding up the benchmark
- [x] I believe we are only missing: #660
- see also #836
- see also #838
- see also #835
- #705 (partly depends on 1, 2 as well as #879)
- Figuring out #752 (partly depends on 3)
- Deciding on meaningful benchmark subsets (depends on 3)
- see #837
- #896 (depends on 3, 4 and 5) (see also #595)
- see #839
- Updating leaderboard to new format https://github.com/embeddings-benchmark/mteb/discussions/674 (depends on 3-6)
Is there anything else that is needed?
Construction of MMTEB-Lite? It would be a faster version of MMTEB. Two approaches come to mind for implementing this:
- Reducing the size of the document set for some retrieval benchmarks.
- Reducing the number of tasks.
Hey @KennethEnevoldsen I'd also like to merge the dataset in #773, for three reasons: a) we don't seem to have the Brazilian dialect represented, b) the multilabel task doesn't have large language coverage, and c) I had it prepared for a long time, but the multilabel task was only merged last week while I was away. We only need to address a problem with stratification of the splits there.
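For multilabel data, a plain random split can easily leave rare labels missing from one split, which is presumably the stratification problem mentioned above. A minimal sketch of a greedy multilabel stratified split (the function name and logic are illustrative, not mteb's actual implementation; labels are assumed to be given as one set per sample):

```python
from collections import Counter

def stratified_multilabel_split(samples, labels, test_fraction=0.2):
    """Greedy multilabel stratified split (illustrative sketch).

    samples: list of examples; labels: list of label sets, one per sample.
    A sample goes to the test split while any of its labels is still
    under-represented there, up to the overall test-size budget.
    """
    label_counts = Counter(lab for labs in labels for lab in labs)
    # Assign samples carrying the rarest labels first, so scarce labels
    # are not exhausted before the test split gets its share.
    order = sorted(
        range(len(samples)),
        key=lambda i: min((label_counts[lab] for lab in labels[i]), default=float("inf")),
    )
    max_test = test_fraction * len(samples)
    test_label_counts = Counter()
    train, test = [], []
    for i in order:
        # How far the test split is below its target count for these labels.
        deficit = sum(
            test_fraction * label_counts[lab] - test_label_counts[lab]
            for lab in labels[i]
        )
        if deficit > 0 and len(test) < max_test:
            test.append(i)
            test_label_counts.update(labels[i])
        else:
            train.append(i)
    return train, test
```

Proper iterative stratification does several passes over the labels, but even this greedy variant keeps every label represented in both splits far more reliably than a uniform random split.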
@vaibhavad yes, def. we need to construct the benchmarks and ideally think about downsampling some of the larger retrieval datasets. A solution might be to implement a downsample function for retrieval tasks.
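As a rough sketch of what such a downsample function could look like (the function name, signature, and dict-based corpus/qrels layout here are assumptions for illustration, not mteb's actual API), the key constraint is to keep every judged document so the relevance judgements stay valid:

```python
import random

def downsample_corpus(corpus, qrels, target_size, seed=42):
    """Shrink a retrieval corpus while keeping every judged document.

    corpus: dict doc_id -> text; qrels: dict query_id -> {doc_id: relevance}.
    All documents that appear in the relevance judgements are kept, and the
    remainder of the budget is filled with a random sample of unjudged
    documents (which act as distractors).
    """
    judged = {doc_id for judgements in qrels.values() for doc_id in judgements}
    unjudged = [doc_id for doc_id in corpus if doc_id not in judged]
    n_fill = max(0, target_size - len(judged))
    # Seeded sampling keeps the downsampled benchmark reproducible.
    sampled = random.Random(seed).sample(unjudged, min(n_fill, len(unjudged)))
    keep = judged | set(sampled)
    return {doc_id: corpus[doc_id] for doc_id in keep}
```

Note that dropping unjudged distractors makes the task easier, so scores on a downsampled corpus are not directly comparable to the full benchmark; a fixed seed at least keeps them comparable across models.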
Thanks @dokato - let us get it merged in as well. Looks to be in a reasonable state
Hey @KennethEnevoldsen I read the list and I think I can help with running models: https://github.com/embeddings-benchmark/mteb/discussions/705
@KennethEnevoldsen Is there anything meaningful new contributors can help with?
Hi @jordiclive! I believe there are multiple avenues to take: any of the outlined paper segments would be meaningful (see the updated post above), implementing models (see e.g. #845, which I will finish up either Monday or over the weekend), or starting work on 8)
quick question: is there a script to select & run all MMTEB tasks? I'm a bit unclear about the current development progress and how MMTEB differs from the current MTEB (in different languages).
Best
Bo
@bwanglzu You can select a benchmark like this:

```python
import mteb

benchmark = mteb.get_benchmark("MTEB(eng, classic)")  # or mteb.get_benchmarks()
```

Full list of benchmarks here
Will close this issue as MMTEB has been submitted, moving the public preprint release over to https://github.com/embeddings-benchmark/mteb/issues/1405