api-inference-community

Allow choice of similarity function for Sentence Similarity task

Open NimaBoscarino opened this issue 3 years ago • 8 comments

Some Sentence Transformers sentence similarity models need to use the dot-product for computing the similarity between embeddings, such as some of the MS-MARCO models on this page.

At the moment the Inference API uses pytorch_cos_sim, so it might be good to allow users to specify which similarity function to use via a parameter.
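The difference matters because the two score functions can rank documents differently: cosine similarity normalizes embeddings first, while the dot product is sensitive to embedding magnitude (which the MS-MARCO dot-product models exploit). A minimal sketch with toy NumPy vectors (not the actual pipeline code):

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity: dot product of L2-normalized vectors."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def dot_score(a, b):
    """Raw dot product: sensitive to embedding magnitude."""
    return a @ b.T

# Toy embeddings: doc 0 is closer in angle, doc 1 has a larger norm.
query = np.array([[1.0, 0.0]])
docs = np.array([[1.0, 0.2],
                 [2.0, 2.0]])

print(cos_sim(query, docs))    # doc 0 scores higher
print(dot_score(query, docs))  # doc 1 scores higher
```

So a model trained for dot-product retrieval can return a different ranking if the API silently applies cosine similarity instead.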

NimaBoscarino avatar Jul 11 '22 00:07 NimaBoscarino

Yes, this sounds like a good idea. This would require an additional parameter, which I think is ok. Feel free to open a PR if you want :) cc @Narsil

osanseviero avatar Jul 11 '22 08:07 osanseviero

Isn't there a setting or a config within sentence-transformers that could be used to know which similarity function to use?

It feels like this shouldn't be passed by users, as they are likely not to know which operator to use.

Or are you implying that the default operator is correct but sometimes you would want to override it?

Narsil avatar Jul 15 '22 14:07 Narsil

I also remember there was such a configuration, but while writing this pipeline, we were told that cos_sim had basically won. (FYI)

Narsil avatar Jul 15 '22 14:07 Narsil

For just computing sentence similarity there isn't a setting in ST; they just have you use the util.cos_sim or util.dot_score methods, as here: https://www.sbert.net/docs/usage/semantic_textual_similarity.html The dot_score method isn't documented on that page, though; it's really only mentioned on the MS-MARCO page.

the default operator is correct but sometimes you would want to override

Yup, pretty much! In their other methods, like paraphrase mining, they have the score function set to cos_sim by default, and users can override it. I don't personally know how common using the dot product is, but it looks like there are some use cases: https://www.reddit.com/r/MachineLearning/comments/pd6wjh/comment/haobugl
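Following the paraphrase-mining pattern, the parameter could default to the current behaviour and only change results when explicitly overridden. A hypothetical sketch (names like `sentence_similarity`, `score_function`, and `SCORE_FUNCTIONS` are illustrative, not the actual api-inference-community API):

```python
import numpy as np

def _cos_sim(a, b):
    # Current pipeline behaviour: cosine similarity.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def _dot_score(a, b):
    # Alternative for models trained with dot-product retrieval.
    return a @ b.T

SCORE_FUNCTIONS = {"cos_sim": _cos_sim, "dot_score": _dot_score}

def sentence_similarity(source_emb, other_embs, score_function="cos_sim"):
    """Default mirrors the current cos_sim behaviour; callers may override."""
    try:
        score = SCORE_FUNCTIONS[score_function]
    except KeyError:
        raise ValueError(f"Unknown score_function: {score_function!r}")
    return score(source_emb, other_embs)
```

Existing callers (and the widgets, which send no parameters) would be unaffected, while dot-product models could opt in.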

NimaBoscarino avatar Jul 15 '22 19:07 NimaBoscarino

I don't have a super strong opinion against making a parameter but I have multiple indicators that make me feel like it's not correct.

  • The widgets don't send parameters, and I don't think it's super user-friendly to make users choose (how is a random user supposed to know which one to use?)
  • It seems to me that models are trained with a certain similarity function, so using the other one doesn't make sense. Allowing users to choose seems like letting them shoot themselves in the foot by using the wrong function.

Do you think we could take this upstream to fix it properly, and store the information somewhere in a configuration? That seems like the cleanest way to me right now. What do you think?

Again, we could just add the parameter.

Narsil avatar Jul 19 '22 07:07 Narsil

Yes, since this information is really model-dependent, having it in https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1/blob/main/config.json (or config_sentence_transformers.json) makes more sense than adding a parameter for users to input.

osanseviero avatar Jul 19 '22 07:07 osanseviero

That sounds good to me! config_sentence_transformers.json seems like an appropriate place for it, but at the moment ST isn't actually using it for anything other than version numbers for ST, transformers, and PyTorch, e.g. https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/blob/main/config_sentence_transformers.json
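To make the proposal concrete, the file could gain a field naming the intended score function alongside the existing version block. A hypothetical sketch (the field name `similarity_fn_name` and its values are illustrative, not an existing ST convention):

```json
{
  "__version__": {
    "sentence_transformers": "2.0.0",
    "transformers": "4.6.1",
    "pytorch": "1.8.1"
  },
  "similarity_fn_name": "dot_score"
}
```

The Inference API could then read this field at model load time and fall back to cos_sim when it is absent, preserving current behaviour for existing models.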

I'll open up an issue on ST for storing the similarity function in that file, and for having the ST util methods use it, to get input from Nils.

NimaBoscarino avatar Jul 19 '22 15:07 NimaBoscarino

Opened up an issue for this: https://github.com/UKPLab/sentence-transformers/issues/1643

I'll leave this issue open for now until there's a resolution on that ST issue, unless you think otherwise!

NimaBoscarino avatar Jul 20 '22 06:07 NimaBoscarino