Conversation Retrieval Task and TopiOCQA dataset
Checklist for adding MMTEB dataset

Reason for dataset addition:

- [ ] I have tested that the dataset runs with the `mteb` package.
- [ ] I have run the following models on the task (adding the results to the PR). These can be run using the `mteb run -m {model_name} -t {task_name}` command.
  - [ ] `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`
  - [ ] `intfloat/multilingual-e5-small`
- [ ] I have checked that the performance is neither trivial (both models gain close to perfect scores) nor random (both models gain close to random scores).
- [ ] If the dataset is too big (e.g. >2048 examples), consider using `self.stratified_subsampling()` under `dataset_transform()`.
- [ ] I have filled out the metadata object in the dataset file (find documentation on it here).
- [ ] Run tests locally to make sure nothing is broken using `make test`.
- [ ] Run the formatter to format the code using `make lint`.
- [ ] I have added points for my submission to the points folder using the PR number as the filename (e.g. `438.jsonl`).
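For reference, the CLI command in the checklist can also be run from Python; a minimal sketch is below. The task name `"TopiOCQA"` is assumed to match the name registered in this PR, and the output folder is arbitrary.

```python
# Rough Python equivalent of `mteb run -m intfloat/multilingual-e5-small -t TopiOCQA`.
from sentence_transformers import SentenceTransformer

from mteb import MTEB

model = SentenceTransformer("intfloat/multilingual-e5-small")
evaluation = MTEB(tasks=["TopiOCQA"])  # task name assumed to match this PR
evaluation.run(model, output_folder="results")
```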
Based on the current changes, here are the scenarios now:

1. User provides just the `encode` and/or `predict` function: fall back to the default implementation - history turns are concatenated for both bi-encoders and cross-encoders.
2. User additionally provides an `encode_conversations` function: converting the conversation history to an embedding is done by the user-provided function for bi-encoders. Cross-encoders still follow the same default implementation, i.e. history turn concatenation.
3. User additionally provides a `convert_conv_history_to_query` function: the user-provided logic is used in both bi-encoders and cross-encoders to convert the conversation history to a single input string.

Doing both 2 and 3 provides maximum flexibility and options to the user.
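To make the default behaviour in scenario 1 concrete, here is a minimal sketch of the concatenation fallback; the function names and signatures are illustrative assumptions, not the exact mteb implementation.

```python
from typing import List


def convert_conv_history_to_query_default(conversation: List[str]) -> str:
    """Default text representation: concatenate the history turns into one string.

    This is what cross-encoders receive, and it is also the bi-encoder fallback
    when the model does not provide `encode_conversations`.
    """
    return " ".join(turn.strip() for turn in conversation)


def encode_conversations_default(model, conversations: List[List[str]], **encode_kwargs):
    """Bi-encoder fallback: embed the concatenated history with the model's regular `encode`."""
    queries = [convert_conv_history_to_query_default(conv) for conv in conversations]
    return model.encode(queries, **encode_kwargs)
```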
I like the plan to extend it to the cross-encoders! Can we collapse 2 and 3 into the same `encode_conversations` though? I think they can both call the same function, which defaults to concatenation, and then the user can extend it to e.g. average embeddings, provide a query rewriter, etc.
> Can we collapse 2 and 3 into the same `encode_conversations` though?
I thought about that, but there are a couple of issues here.

- The goal of `encode_conversations` is to take a conversation history as input and output an embedding. This is not relevant to cross-encoders, as cross-encoders jointly attend to some text representation of the conversation and the document, so they need a separate method for that.
- If we use `encode_conversations` with two possible output types (an embedding for bi-encoders and text for cross-encoders), then users will not be able to submit one single model that runs on the entire benchmark.

What do you think about these concerns?
Summary from a brief chat we had, for note-taking purposes.

We will keep both `encode_conversations` and `convert_conv_history_to_query` to allow both averaging query embeddings and converting the conversation to a text string (for cross-encoders).
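For illustration, a user model providing both hooks could look roughly like the sketch below; the class name, hook signatures, and the separator token are assumptions rather than the task's exact interface.

```python
import numpy as np
from sentence_transformers import SentenceTransformer


class MyConversationalModel:
    """Hypothetical model wrapper implementing both hooks discussed above."""

    def __init__(self, model_name: str = "intfloat/multilingual-e5-small"):
        self.model = SentenceTransformer(model_name)

    def encode(self, sentences, **kwargs):
        # Standard bi-encoder entry point used for documents and plain queries.
        return self.model.encode(sentences, **kwargs)

    def encode_conversations(self, conversations, **kwargs):
        # Bi-encoder path: average per-turn embeddings instead of embedding
        # one long concatenated string.
        turn_embeddings = [self.model.encode(conv, **kwargs) for conv in conversations]
        return np.stack([emb.mean(axis=0) for emb in turn_embeddings])

    def convert_conv_history_to_query(self, conversations):
        # Cross-encoder path (or query rewriting): one text string per conversation.
        return [" [SEP] ".join(conv) for conv in conversations]
```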
I think all that's left is adding the results files and the points files @vaibhavad. Thanks for the help!
@orionw - This is ready to be merged! Thanks for all the support and feedback throughout the process
Thanks @vaibhavad! LGTM.
Two very minor comments/questions:
- Feel free to give yourself an additional 3-5 points or so for adding the conversational retrieval features if you want; it can go in the bug fixes category. This is more than just adding a dataset :)
- Are the retrieval numbers for TopiOCQA in the right ballpark for these relatively bad embedding models? Just wanted to check since nDCG is around 0.10 (which could be totally fine, but just wanted to confirm).
Thanks, I added 4 points under bug fixes. I am also working with @xhluca, and the conversational retrieval workflow may change slightly to include the role of the speaker as well (assistant, user, etc.), but it is best to take that up in a different PR.
Regarding the scores, they look okay to me. The corpus is 25 million passages, so the random score would be far worse than this. Models trained on conversational retrieval should get scores in the 0.6-0.8 range (ballpark), so 0.1 seems reasonable for a model that is not trained on this task but can still do some surface-level matching. Also, the first question of each conversation is a decontextualized question, similar to many QA datasets these models have been trained on, so some of the performance may be coming from there.
Feel free to merge the branch :) Thanks again for all the help and support.
Awesome! I have a similar dataset (as @vaibhavad mentioned). Is it possible for me to request both of you (@orionw @vaibhavad) as reviewers for my next PR?