llama : support reranking API endpoint and models

Open ciekawy opened this issue 1 year ago • 10 comments

Prerequisites

  • [X] I am running the latest code. Mention the version if possible as well.
  • [X] I carefully followed the README.md.
  • [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [X] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Support reranking API and models.

Motivation

Reranking is currently a very common technique used alongside embeddings in RAG systems. There are also models where the same model instance can be used for both embeddings and reranking, which is a great resource optimisation.

Possible Implementation

Reranking is relatively close to embeddings, and there are models that handle both embed/rerank, like bge-m3, which llama.cpp already supports with --embed. One possible challenge/dilemma is that inference and embeddings use the OpenAI API schema, and OpenAI does not offer a rerank API; the Jina rerank API is what other projects commonly use instead. In terms of the actual reranking, it should not be very complex, as it is quite closely related to embedding calls.
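For reference, a Jina-style rerank request looks roughly like this (field names recalled from Jina's public API, so treat the exact shape as an approximation):

{
    "model": "jina-reranker-v2-base-multilingual",
    "query": "what is llama.cpp?",
    "documents": ["a C/C++ LLM inference project", "a pasta recipe"],
    "top_n": 2
}

The response pairs each document index with a relevance_score, which is essentially the same data flow as an embeddings call plus a scoring step.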

ciekawy avatar Jul 18 '24 07:07 ciekawy

I found just one discussion opened for reranking, https://github.com/ggerganov/llama.cpp/discussions/8216, and two loosely related tickets; linking them for visibility:

https://github.com/ggerganov/llama.cpp/issues/5403 https://github.com/ggerganov/llama.cpp/issues/5954

Slightly related, but rather out of scope for this ticket, is also support for more formats to be converted to gguf.

ciekawy avatar Jul 18 '24 07:07 ciekawy

I'm developing a lightweight (in terms of disk usage) local RAG application. Embedding/LLM is handled very well by llama.cpp, but the reranker is a headache. My reranker of choice (bge-reranker-v2-m3) takes 2GB of disk space, which is bigger than the embedding model and LLM together. Huggingface's text-embedding-inference is fast, but it doesn't support any quantization (at least in an obvious way); infinity_emb supports onnx's int8 quantization but is not lightweight. If llama.cpp supported rerankers, I would definitely use it for all embedding/reranking/LLM.

rujialiu avatar Jul 18 '24 13:07 rujialiu

I am not familiar with the concept of "reranking" - do you have some good resource, or can you explain it in simple terms here?

ggerganov avatar Jul 18 '24 14:07 ggerganov

TL;DR: Reranking involves taking a set of search results and reordering them so that they better match a specific query :)

It's all nicely described here: https://jina.ai/reranker/

ciekawy avatar Jul 18 '24 14:07 ciekawy

We can also reduce token usage and hallucination by filtering out low-score documents before feeding them to the LLM, which is especially useful when developing tool-using agents: suppose you have 1000 built-in tools and don't want to pass all of them to the LLM. A good approach is to use embeddings to get, say, the top-30 similar tools first, and then use the reranker to retrieve only the highly relevant ones. Embedding + vector search is fast but much less accurate than a reranker, so this embedding+reranker+LLM workflow works very well in practice.
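A rough Python sketch of that workflow (embed_top_k and rerank are hypothetical placeholders for whatever embedding and reranking backends you use):

def select_tools(query, tools, embed_top_k, rerank, k=30, threshold=0.5):
    # Stage 1: cheap embedding + vector search narrows 1000 tools to k candidates.
    candidates = embed_top_k(query, tools, k=k)
    # Stage 2: the reranker scores each (query, tool description) pair.
    scores = rerank(query, [t["description"] for t in candidates])
    # Only highly relevant tools are passed on to the LLM.
    return [t for t, s in zip(candidates, scores) if s >= threshold]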

rujialiu avatar Jul 19 '24 01:07 rujialiu

FYI: chatllm.cpp supports 2 re-ranker models, and RAG of course.

foldl avatar Jul 21 '24 15:07 foldl

Re-ranking models output a score for a pair of a question and a text chunk, measuring how well the chunk fits as an answer.

foldl avatar Jul 21 '24 15:07 foldl

Got it. I assume there are some special tokens that are used to specify which text is the question and which text is the answer? And it seems that instead of an LM head, the model ends with a classification head. Is the attention non-causal?

ggerganov avatar Jul 22 '24 08:07 ggerganov

In the case of XLMRobertaForSequenceClassification, used by bge-reranker-m3, bce-reranker, etc., Q&A are encoded as:

~~bos question eos bos answer eos~~

It is non-causal.


The correct pattern is:

cls question sep sep answer sep

which, in the case of BGE and BCE, is equivalent to:

bos question eos eos answer eos

foldl avatar Jul 22 '24 08:07 foldl
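In other words, something like this minimal sketch (the special-token IDs here are illustrative placeholders, not the real vocabulary values):

# How a (question, answer) pair is packed for an
# XLMRobertaForSequenceClassification-style reranker (sketch).
CLS, SEP = 0, 2  # illustrative IDs; in BGE/BCE, bos plays the role of cls and eos of sep

def encode_pair(tokenize, question, answer):
    # cls question sep sep answer sep
    return [CLS] + tokenize(question) + [SEP, SEP] + tokenize(answer) + [SEP]

# The sequence runs through the encoder with non-causal (bidirectional)
# attention, and a classification head on the cls position outputs a single
# logit: the relevance score.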

it may be worth having a look at the actual rerankers, and their config files

  • https://huggingface.co/BAAI/bge-reranker-v2-m3
  • https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual

ciekawy avatar Jul 22 '24 10:07 ciekawy

I'll give this a try

ggerganov avatar Sep 16 '24 11:09 ggerganov

@ggerganov I am running the bge-reranker-v2-m3 model with llama.cpp server b4641. The rerank API is working, but the score seems weird: it always returns "1" for the best match, which is not expected. Is this by design, or is something wrong with my configuration?

thiner avatar Feb 05 '25 11:02 thiner

Can you provide the commands that you are using?

This works for me:

./bin/llama-server \
    -m ../models/bge-reranker-v2-m3/ggml-model-f16.gguf \
    -c 65536 -np 8 -b 8192 -ub 8192 -fa \
    --host 127.0.0.1 --port 8012 -lv 1 \
    --reranking

ggerganov avatar Feb 05 '25 11:02 ggerganov

I copied your settings, and tested the API with below request:

{
    "model": "bge-reranker",
    "query": "A man is eating pasta.",
    "documents": [
        "A man is eating food.",
        "A man is eating a piece of bread.",
        "一个中国男人在吃面条",
        "The girl is carrying a baby.",
        "A man is riding a horse.",
        "A young girl is playing violin."
    ]
}

The response:

{
  "id": null,
  "results": [
    {
      "index": 0,
      "relevance_score": 7.741800308227539
    },
    {
      "index": 1,
      "relevance_score": -2.33689022064209
    },
    {
      "index": 2,
      "relevance_score": 3.8466310501098633
    },
    {
      "index": 3,
      "relevance_score": -11.016427993774414
    },
    {
      "index": 4,
      "relevance_score": -10.9613037109375
    },
    {
      "index": 5,
      "relevance_score": -11.018434524536133
    }
  ],
  "meta": null
}

This looks normal. I saw the score "1" in Dify, which is the system using the reranker model, so it's very likely a problem on Dify's side. Thanks for your help.

BTW, most rerank APIs return a score in the range from 0 to 1. Can llama.cpp server implement this feature?

thiner avatar Feb 06 '25 03:02 thiner

We can; I just thought it is something simple enough that clients can do on their end. But it's fine to have an option to do it on the server. PRs welcome.

ggerganov avatar Feb 06 '25 07:02 ggerganov

@thiner FYI: you can apply the sigmoid function to relevance_score to get a score in the range from 0 to 1: f(x) = 1 / (1 + exp(-x)).

foldl avatar Feb 06 '25 07:02 foldl

@foldl thanks for your advice. But I only program in Java... Could you kindly help to create a PR?

thiner avatar Feb 06 '25 08:02 thiner

It's something like this in Java.

// Map the raw relevance score into the (0, 1) range with the logistic (sigmoid) function.
public static double sigmoid(double x) {
    return 1.0 / (1.0 + Math.exp(-x));
}

foldl avatar Feb 06 '25 09:02 foldl

@foldl Thanks, but I meant could you implement the feature in llama.cpp server? As @ggerganov mentioned, maybe a new argument for starting llama.cpp server to enable the feature. I am using this rerank API from Dify, and I am not able to (or it is not the correct way to) modify the source code of Dify.

thiner avatar Feb 07 '25 10:02 thiner

@thiner Before a PR for this lands, you can try scripting it in Dify:

https://docs.dify.ai/guides/workflow/node/code
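For example, a code node along these lines could normalize the score before it is used downstream (a sketch assuming Dify's Python code nodes, where you map the input/output variable names in the node UI):

import math

def main(relevance_score: float) -> dict:
    # Squash the raw score into (0, 1) with the sigmoid function.
    return {"normalized_score": 1.0 / (1.0 + math.exp(-relevance_score))}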

foldl avatar Feb 07 '25 11:02 foldl

I copied your settings, and tested the API with the request above. [...] BTW, most rerank APIs return a score in the range from 0 to 1. Can llama.cpp server implement this feature?

How do you call this API?

lehoangh avatar Jul 10 '25 09:07 lehoangh

How do you call this API?

What did you mean? If you were asking how to call the API, it's a standard API provided by llama.cpp server. You can find it here: https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md#post-reranking-rerank-documents-according-to-a-given-query
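For example, a minimal Python sketch against that endpoint (host/port taken from the server command earlier in this thread; adjust to your setup):

import json
import urllib.request

# POST a query and candidate documents to llama-server's /reranking endpoint
# and print each document's index and relevance score.
payload = {
    "query": "A man is eating pasta.",
    "documents": ["A man is eating food.", "A man is riding a horse."],
}
req = urllib.request.Request(
    "http://127.0.0.1:8012/reranking",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for r in json.load(resp)["results"]:
        print(r["index"], r["relevance_score"])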

thiner avatar Jul 11 '25 03:07 thiner