Open-Assistant
Compute text-embeddings for incoming messages via HF feature-extraction pipeline
We want to store an embedding together with each message in the DB to measure similarity and diversity (e.g. to detect (near-)duplicates).
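To make the goal concrete: once each message has an embedding, near-duplicates can be flagged by comparing vectors, e.g. with cosine similarity. A minimal pure-Python sketch (the vectors here are dummy values, not real model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two embedding vectors; near-duplicate
    messages should score close to 1.0."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```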
- Select a model for the embedding calculation, e.g. from https://huggingface.co/models?pipeline_tag=feature-extraction&sort=downloads (MiniLM & LaBSE were mentioned in internal discussions); potentially discuss with the ML team on Discord. A multilingual model would be preferred; when in doubt, choose a popular one.
- Find a way to store the embedding vector as a Postgres array via SQLModel (maybe like shown here?) and add a new (nullable) `<short_modelname>_embedding` column to store the embedding of the message text; create an alembic update script.
- Use the HuggingFaceAPI class to make an async web call for each incoming message and store the embedding in the DB. In case of an exception, store NULL in the embedding field (still store the message successfully).
- Create a new debug flag in the backend settings class (default False) that allows disabling the embedding calculation. Set the env variable to True in the `scripts/backend_development/run-local.sh` script.
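The fallback behavior described above (store NULL on failure, but keep the message) can be sketched as a small async wrapper. The `api_call` parameter stands in for the backend's HuggingFaceAPI call, whose exact name and signature are assumed here:

```python
import asyncio

async def fetch_embedding(api_call, text):
    """Return the embedding for `text`, or None if the call fails,
    so the message itself can still be stored with a NULL embedding."""
    try:
        return await api_call(text)
    except Exception:
        # Any API error (timeout, HTTP failure, ...) degrades to None
        # rather than blocking message creation.
        return None

# Fake clients for illustration: one succeeds, one raises.
async def ok(text):
    return [0.1, 0.2]

async def boom(text):
    raise RuntimeError("HF API down")

print(asyncio.run(fetch_embedding(ok, "hi")))    # → [0.1, 0.2]
print(asyncio.run(fetch_embedding(boom, "hi")))  # → None
```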
(Non-collaborators: Please leave a comment if you want to work on this task. Someone will then assign the task to you.)
I would take a look at this!
Would it make sense to save the embeddings in a new table? My thinking is that with a new table with the columns `message_id`, `model_name` & `embedding` we could simply store multiple embeddings and experiment with different models.
I think I could also take a look at this one, as it is related to the classification of messages in HF.
Having a new table would make sense to me, to minimise schema changes when new models are added.
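The table proposed in the comments above could look like the following sketch. It uses stdlib sqlite for illustration only; the real backend would define this as a SQLModel model plus an alembic migration against Postgres (where the embedding could be a float array instead of JSON text):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE message_embedding (
        message_id TEXT NOT NULL,
        model_name TEXT NOT NULL,
        embedding  TEXT,          -- JSON-encoded vector here; Postgres could use ARRAY(Float)
        PRIMARY KEY (message_id, model_name)
    )
""")

# One message can carry embeddings from several models side by side.
for model, vec in [("MiniLM", [0.1, 0.2]), ("LaBSE", [0.3, 0.4])]:
    conn.execute(
        "INSERT INTO message_embedding VALUES (?, ?, ?)",
        ("msg-1", model, json.dumps(vec)),
    )

rows = conn.execute(
    "SELECT model_name FROM message_embedding WHERE message_id = ? ORDER BY rowid",
    ("msg-1",),
).fetchall()
print([r[0] for r in rows])  # → ['MiniLM', 'LaBSE']
```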
@SummerSigh see this issue. Similar to embedders we are building for safety. Let's all keep in contact re this so we can cross use stuff @jojopirker.
I'll ping you guys in the discord channel :) @ontocord
@ontocord Ok! Sounds good!
If I understand correctly, this was solved in #540.