
Hi, are there any plans for word embeddings?

Open hlhr202 opened this issue 2 years ago • 8 comments

Noob here, excuse me for my stupid feature request. I noticed that someone in llama.cpp is working on word embeddings from the hidden layers. I'm just asking whether there is any possibility of implementing an embedding mode for llama-rs? Thanks!

What I found is this commit.

hlhr202 avatar Mar 22 '23 06:03 hlhr202

Hi, I would like to add the llama.cpp PR here for reference. I just noticed they merged the embedding function: https://github.com/ggerganov/llama.cpp/pull/282/files

hlhr202 avatar Mar 24 '23 17:03 hlhr202

Hi @hlhr202! :wave:

Thanks for bringing this to our attention. The code here doesn't look hard at all to port! We will add it to the repo since it makes sense to have a way for people to extract embeddings.

But I'd like to understand (just to satisfy my curiosity). Why are the LLaMA embeddings useful? Is this the same thing as regular word embeddings from any other model? That is, capture the semantics of a word as a vector to allow computing similarity metrics? Do you have a use case for extracting the embeddings that would help us understand the possibilities better? :smile:

Not saying this is a requirement for the PR, I just want to learn if there are different use cases for this that I'm not aware of.

setzer22 avatar Mar 24 '23 19:03 setzer22

Please check out #72. I implemented some code to extract embeddings, but we still need to validate whether the results are correct and decide how best to expose this at our different API levels.
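
To make the idea concrete, here is a minimal, self-contained Rust sketch of the "extract a sentence embedding from the hidden states" step. The flat `hidden_states` layout (one row of length `n_embd` per evaluated token) and the function name are illustrative assumptions for this sketch, not the actual structures added in #72:

```rust
/// Illustrative only: the flat `hidden_states` layout and this function name
/// are assumptions for the sketch, not the actual llama-rs API from #72.
fn sentence_embedding(hidden_states: &[f32], n_embd: usize) -> Vec<f32> {
    assert!(
        n_embd > 0 && !hidden_states.is_empty() && hidden_states.len() % n_embd == 0,
        "hidden_states must contain a whole number of n_embd-sized rows"
    );
    let n_tokens = hidden_states.len() / n_embd;
    // One common choice: use the hidden state of the last prompt token
    // as the fixed-length sentence embedding.
    hidden_states[(n_tokens - 1) * n_embd..].to_vec()
}

fn main() {
    // Two fake tokens with a 4-dimensional hidden state each.
    let hidden_states = vec![0.1_f32, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8];
    let embedding = sentence_embedding(&hidden_states, 4);
    println!("{:?}", embedding); // [0.5, 0.6, 0.7, 0.8]
}
```

Taking the last token's hidden state is only one possible choice; pooling strategies come up again further down the thread.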

setzer22 avatar Mar 24 '23 20:03 setzer22

> Why are the LLaMA embeddings useful? [...] Do you have a use case for extracting the embeddings that would help us understand the possibilities better?

Yes, computing semantic similarity is quite useful in many cases. It allows us to search for semantically similar sentences using a natural-language query. BTW, I will help verify the PR and quickly merge it into my llama-node.

hlhr202 avatar Mar 25 '23 04:03 hlhr202

> Please check out #72. I implemented some code to extract embeddings, but we still need to validate whether the results are correct and decide how best to expose this at our different API levels.

@setzer22 thanks for your great work! I just did a simple test computing cosine similarity, comparing llama-rs and OpenAI's embedding function. Not sure if it is accurate...

dog1: My favourite animal is the dog
dog2: I have just adopted a cute dog
cat1: My favourite animal is the cat

llama-rs model: ggml-alpaca-7b-int4

llama-rs cosine similarity:
dog1 vs dog2  ->  0.6884680986404419
dog1 vs cat1  ->  0.9326339960098267

openai model: text-embedding-ada-002

openai cosine similarity:
dog1 vs dog2  ->  0.8523955345153809
dog1 vs cat1  ->  0.9551568031311035

It looks like everything works, but the resulting similarities are quite different from OpenAI's text-embedding-ada-002. I will probably run the same tests in llama.cpp as another check.
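
For anyone reproducing this comparison, the test reduces to plain cosine similarity between two embedding vectors. A minimal, self-contained Rust sketch (the toy vectors below are placeholders, not real LLaMA or ada-002 output):

```rust
/// Cosine similarity between two embedding vectors.
/// Returns a value in [-1.0, 1.0]; higher means more similar.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0; // degenerate case: zero vector
    }
    dot / (norm_a * norm_b)
}

fn main() {
    // Toy vectors standing in for real model embeddings.
    let dog1 = vec![0.9_f32, 0.1, 0.3];
    let dog2 = vec![0.8_f32, 0.2, 0.4];
    let cat1 = vec![0.1_f32, 0.9, 0.2];
    println!("dog1 vs dog2 -> {}", cosine_similarity(&dog1, &dog2));
    println!("dog1 vs cat1 -> {}", cosine_similarity(&dog1, &cat1));
}
```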

hlhr202 avatar Mar 25 '23 11:03 hlhr202

It seems llama.cpp has not finished embeddings yet. I tried to print the embedding vectors, but got size 0.

hlhr202 avatar Mar 25 '23 15:03 hlhr202

@setzer22 sorry, I reopened this ticket because I noticed some changes in llama.cpp. I have also tested a few examples on 7B Alpaca, but the results are not very accurate (not sure if that is caused by the small model size). What I noticed from llama.cpp is that they are not using any end token as the representation of the sentence embedding; they put all prompt tokens into the eval function, but always get a fixed-length vector. (screenshot attached)

hlhr202 avatar Apr 05 '23 02:04 hlhr202

@setzer22 I think our llama-rs implementation for embeddings may not be correct. What I noticed from llama.cpp is that they are not using any end token as the representation of the sentence embedding; they put all prompt tokens into the eval function, but always get a fixed-length vector. (screenshot attached)

Another trick I found, though I am not sure their implementation makes sense... I guess they just drop the additional vector items, and I am not even sure they drop the right part; it's quite weird. I will keep following the issue over the next few weeks. I am also going to test the 30B model to see whether semantic accuracy is better than 7B Alpaca. (screenshot attached)
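
For reference, the reason an eval over a variable number of prompt tokens can still yield a fixed-length vector is that the per-token hidden states get reduced to a single vector, for example by taking the last token's state or by pooling. A sketch of mean pooling as one common strategy (an illustration only, not a claim about what llama.cpp actually does):

```rust
/// Illustrative mean pooling: average the per-token hidden states into a
/// single fixed-length sentence vector.
fn mean_pool(token_embeddings: &[Vec<f32>]) -> Vec<f32> {
    assert!(!token_embeddings.is_empty(), "need at least one token embedding");
    let dim = token_embeddings[0].len();
    let mut pooled = vec![0.0_f32; dim];
    for token in token_embeddings {
        assert_eq!(token.len(), dim, "all token embeddings must share one dimension");
        for (acc, value) in pooled.iter_mut().zip(token) {
            *acc += value;
        }
    }
    let n = token_embeddings.len() as f32;
    pooled.iter_mut().for_each(|x| *x /= n);
    pooled
}

fn main() {
    // Three fake token embeddings of dimension 3.
    let tokens = vec![
        vec![1.0_f32, 0.0, 0.0],
        vec![0.0, 1.0, 0.0],
        vec![0.0, 0.0, 1.0],
    ];
    println!("{:?}", mean_pool(&tokens)); // roughly [0.333, 0.333, 0.333]
}
```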

hlhr202 avatar Apr 05 '23 02:04 hlhr202

This should now be sorted / understandable with #273. Let me know if there's anything else.

philpax avatar May 24 '23 22:05 philpax