llm
Hi, are there any plans for word embeddings?
Noob here, excuse my naive feature request. I noticed that someone in llama.cpp is working on extracting word embeddings from the hidden layers. I'm just asking whether there is any possibility of implementing an embedding mode for llama-rs? Thanks!
What I found is this commit.
Hi, I would like to add the llama.cpp PR here for reference. I just noticed they merged the embedding function: https://github.com/ggerganov/llama.cpp/pull/282/files
Hi @hlhr202! :wave:
Thanks for bringing this to our attention. The code here doesn't look hard at all to port! We will add it to the repo since it makes sense to have a way for people to extract embeddings.
But I'd like to understand (just to satisfy my curiosity). Why are the LLaMA embeddings useful? Is this the same thing as regular word embeddings from any other model? That is, do they capture the semantics of a word as a vector to allow computing similarity metrics? Do you have a use case for extracting the embeddings that would help us understand the possibilities better? :smile:
Not saying this is a requirement for the PR, I just want to learn if there are different use cases for this that I'm not aware of.
Please check out #72. I implemented some code to extract embeddings, but we still need to validate whether the results are correct and decide how best to expose this through our different levels of API.
Yes, computing semantic similarity is quite useful in many cases. It allows us to search for semantically similar sentences using a natural language query. By the way, I will help verify the PR and quickly merge it into my llama-node.
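To make the use case concrete, here is a minimal Rust sketch of that kind of semantic search: embed the query and every candidate sentence, then rank the candidates by cosine similarity. Nothing here is an actual llama-rs or llama-node API; the embedding vectors are assumed to come from whatever #72 ends up exposing.

```rust
/// Cosine similarity between two embedding vectors:
/// dot(a, b) / (|a| * |b|), in [-1.0, 1.0]; closer to 1.0 means more similar.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

/// Rank candidate sentences by similarity to a query embedding, most similar
/// first. `query` and each candidate's embedding must come from the same model.
fn rank_by_similarity<'a>(
    query: &[f32],
    candidates: &'a [(String, Vec<f32>)],
) -> Vec<(&'a str, f32)> {
    let mut scored: Vec<(&str, f32)> = candidates
        .iter()
        .map(|(text, emb)| (text.as_str(), cosine_similarity(query, emb)))
        .collect();
    scored.sort_by(|x, y| y.1.total_cmp(&x.1));
    scored
}
```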
@setzer22 thanks for your great work! I just did a simple test computing cosine similarity, comparing llama-rs and OpenAI's embedding function (a rough sketch of the computation follows the numbers below). Not sure if it is accurate...
dog1: My favourite animal is the dog
dog2: I have just adopted a cute dog
cat1: My favourite animal is the cat
llama-rs model: ggml-alpaca-7b-int4
llama-rs cosine similarity:
dog1 vs dog2 -> 0.6884680986404419
dog1 vs cat1 -> 0.9326339960098267
openai model: text-embedding-ada-002
openai cosine similarity:
dog1 vs dog2 -> 0.8523955345153809
dog1 vs cat1 -> 0.9551568031311035
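The scores above were computed roughly like this. This is only a sketch: `embed` is a placeholder for the llama-rs embedding call from #72 (and, on the other side, for the OpenAI embeddings API); the cosine helper is the same one sketched earlier in the thread.

```rust
/// Placeholder: substitute the actual llama-rs (#72) or OpenAI embedding call.
fn embed(_text: &str) -> Vec<f32> {
    unimplemented!("hook up an embedding backend here")
}

/// Same cosine similarity helper as in the sketch above.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let dog1 = embed("My favourite animal is the dog");
    let dog2 = embed("I have just adopted a cute dog");
    let cat1 = embed("My favourite animal is the cat");

    println!("dog1 vs dog2 -> {}", cosine_similarity(&dog1, &dog2));
    println!("dog1 vs cat1 -> {}", cosine_similarity(&dog1, &cat1));
}
```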
It looks like everything works, but the resulting similarities are quite different from OpenAI's text-embedding-ada-002. I will probably run the same tests in llama.cpp as another check.
It seems llama.cpp has not finished embeddings yet. I tried to print the embedding vectors, but got size 0.
@setzer22 sorry, I reopened this ticket because I have noticed some changes in llama.cpp. I have also tested a few examples on 7B Alpaca, but the results are not very accurate (not sure if that is caused by the small model size). What I've noticed in llama.cpp is that they do not use any end token as the representation of the sentence embedding; they put all prompt tokens into the eval function, but always get a fixed-length vector back.

@setzer22 I think our llama-rs implementation of embeddings may not be correct. As noted above, llama.cpp does not use any end token as the representation of the sentence embedding; they feed all prompt tokens into the eval function and still always get a fixed-length vector back.
Another trick I found, though I'm not sure their implementation makes sense... I guess they just drop the additional vector items, and I don't even know if they drop the right part, which is quite weird. I will continue to follow the issue over the next few weeks. I'm going to run a test on the 30B model to see if the semantic accuracy is better than 7B Alpaca.
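To illustrate the difference I mean: feeding all prompt tokens through eval gives one hidden state per token (n_tokens x n_embd), and there are two obvious ways to collapse that into a single fixed-length (n_embd) vector. I'm not certain which one llama.cpp actually uses; this is just a sketch to make the two options concrete.

```rust
/// One hidden-state vector (length n_embd) per evaluated prompt token.
type TokenStates = Vec<Vec<f32>>;

/// Option A: use only the last token's hidden state as the sentence embedding
/// (roughly what "using an end token as the representation" would mean).
fn last_token_embedding(states: &TokenStates) -> Vec<f32> {
    states.last().cloned().unwrap_or_default()
}

/// Option B: mean-pool over all tokens. This also yields a fixed-length
/// (n_embd) vector no matter how many prompt tokens were evaluated.
fn mean_pooled_embedding(states: &TokenStates) -> Vec<f32> {
    let Some(first) = states.first() else {
        return Vec::new();
    };
    let mut pooled = vec![0.0f32; first.len()];
    for state in states {
        for (acc, v) in pooled.iter_mut().zip(state) {
            *acc += v;
        }
    }
    let n = states.len() as f32;
    for acc in &mut pooled {
        *acc /= n;
    }
    pooled
}
```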

This should now be sorted / understandable with #273. Let me know if there's anything else.