t2v-transformers-models
Direct tokenization
I had an issue with the t2v-transformers today:
I create embeddings with a sentence-transformers model, once using the sentence-transformers Python library and once using the t2v-transformers container. The cosine distance between the resulting vectors was up to 0.16.
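Roughly how I compared the two, as a minimal sketch (the model name, the local port, and the example text are placeholders, and it assumes the inference container's /vectors endpoint is reachable):

```python
import requests
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer

text = "Some longer input. It contains several sentences. Like this one."
model_name = "sentence-transformers/all-MiniLM-L6-v2"  # placeholder, not necessarily my model

# Embedding via the sentence-transformers library directly
library_vector = SentenceTransformer(model_name).encode(text)

# Embedding via the t2v-transformers container (assumed to run on localhost:8080)
response = requests.post("http://localhost:8080/vectors", json={"text": text})
container_vector = response.json()["vector"]

# Cosine distance between the two vectors; for me this was up to 0.16
print(cosine(library_vector, container_vector))
```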
@antas-marcin quickly and greatly helped me by suggesting that I set "T2V_TRANSFORMERS_DIRECT_TOKENIZE=true". This reduced the cosine distance to almost 0.
When looking into what it does, I noticed two things:
- It's a bit difficult to follow in the code because "tokenize" actually has two meanings
- T2V_TRANSFORMERS_DIRECT_TOKENIZE is not very well documented, but could theoretically be very important
Regarding 1:
In the context of this program, "tokenize" refers both to splitting the input into sentences and to running the transformers tokenizer.
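For example, both of the following operations end up being called "tokenize" (just an illustration, the model name is a placeholder):

```python
import nltk
from transformers import AutoTokenizer

text = "First sentence. Second sentence."

# Meaning 1: splitting the input into sentences (assumes the nltk punkt data is installed)
sentences = nltk.sent_tokenize(text)  # ['First sentence.', 'Second sentence.']

# Meaning 2: the transformers tokenizer turning text into model input tokens
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
tokens = tokenizer.tokenize(text)  # subword tokens the model actually consumes
```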
I suggest renaming direct_tokenize to shall_split_in_sentences or something similar. Actually, shall_embed_sentence_per_sentence might be even more precise, but that is a bit verbose. Other suggestions are very welcome; it's just the general idea. The environment variable would then become T2V_SHALL_SPLIT_IN_SENTENCES.
(see the commit)
Regarding 2: To me this setting seems important and should be documented somewhere. I don't know how to suggest edits for the documentation, so I am writing down here what I think would be helpful:
Environment Settings: T2V_SHALL_SPLIT_IN_SENTENCES: If not set, defaults to true. If set to false, the raw input is used.
By default, all t2v-transformers containers split the input into sentences using nltk with English punctuation rules and calculate the mean over the sentence embeddings. This makes it possible to embed inputs of arbitrary length, but it will produce unexpected results if your text does not have the expected punctuation. Embedding on a per-sentence level could, at least theoretically, degrade the embedding model's performance in cases where it produces better results with longer inputs.
(Also, could this be significantly slower, embedding sentence by sentence rather than a larger input at once?)
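To make the described default behavior concrete, here is a rough sketch of what I understand it to do (this is not the actual container code, and the model name and text are placeholders):

```python
import nltk
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # placeholder
text = "First sentence of a longer input. Second sentence. Third sentence."

# Default (T2V_SHALL_SPLIT_IN_SENTENCES=true): split into sentences,
# embed each sentence separately and average the vectors
sentences = nltk.sent_tokenize(text)  # assumes the nltk punkt data is installed
split_vector = np.mean(model.encode(sentences), axis=0)

# T2V_SHALL_SPLIT_IN_SENTENCES=false: embed the raw input in a single pass
direct_vector = model.encode(text)

# The two vectors generally differ, which is where the cosine distance above came from
```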