LLamaSharp
[Feature Request] Support using embeddings as input
There was previously an issue asking for a way to use embeddings, instead of tokens, as the input for generating a response. llama.cpp now supports using embeddings as input, as shown below.
typedef struct llama_batch {
    int32_t         n_tokens;
    llama_token   * token;
    float         * embd;      // set this member and leave token as NULL to pass embeddings as input
    llama_pos     * pos;
    int32_t       * n_seq_id;
    llama_seq_id ** seq_id;
    int8_t        * logits;

    llama_pos     all_pos_0;   // used if pos == NULL
    llama_pos     all_pos_1;   // used if pos == NULL
    llama_seq_id  all_seq_id;  // used if seq_id == NULL
} llama_batch;
Since LLamaSharp already has a binding for this struct, what remains is to add an API that lets the executors accept embeddings as input.
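As a rough illustration only, such an API might look like the sketch below. Nothing here is actual LLamaSharp surface area: the interface name IEmbeddingExecutor and the InferAsync signature are hypothetical, modelled on the existing token-based executors.

    using System.Collections.Generic;
    using System.Threading;

    // Hypothetical sketch only -- not an existing LLamaSharp interface.
    // The idea: mirror the token-based InferAsync, but accept one embedding
    // vector per input position; these would be written into llama_batch.embd
    // while llama_batch.token stays NULL.
    public interface IEmbeddingExecutor
    {
        // embeddings: one float[n_embd] vector per input position.
        IAsyncEnumerable<string> InferAsync(
            IReadOnlyList<float[]> embeddings,
            CancellationToken cancellationToken = default);
    }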
I've been planning to look into this for a while, since it's required for the BatchedExecutor to support llava. My plan has been to create a new batch class (LLamaBatchEmbeddings), which would probably be a lot simpler than the existing batch (LLamaBatch).
That's great! I didn't notice that it's required for the BatchedExecutor to support llava, so I marked it as a good first issue. You could split the task into sub-tasks and mark the easy, non-urgent ones as good first issue, if you'd like. :)
#770 added a new LLamaBatchEmbeddings, which can be used to drive inference with embeddings instead of tokens. It can be used with any context (it's not tied to the BatchedExecutor) by calling Decode.
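For anyone landing here, a minimal usage sketch follows. The exact LLamaBatchEmbeddings constructor and Add/Decode signatures are assumptions modelled on the token-based LLamaBatch API, and the model path and placeholder embeddings are made up; check #770 for the real surface.

    using LLama;
    using LLama.Common;
    using LLama.Native;

    // Minimal sketch, assuming LLamaBatchEmbeddings mirrors the token-based
    // LLamaBatch API ("model.gguf" is a placeholder path).
    var parameters = new ModelParams("model.gguf");
    using var model = LLamaWeights.LoadFromFile(parameters);
    using var context = model.CreateContext(parameters);

    // Stand-in input: in practice these embeddings would come from e.g. a
    // llava image encoder. Zero vectors are used here only as placeholders.
    var inputEmbeddings = new float[4][];
    for (var i = 0; i < inputEmbeddings.Length; i++)
        inputEmbeddings[i] = new float[model.EmbeddingSize];

    var batch = new LLamaBatchEmbeddings(model.EmbeddingSize);
    for (var i = 0; i < inputEmbeddings.Length; i++)
    {
        // Arguments (assumed): embedding vector, position, sequence id, and
        // whether to compute logits -- requested here for the final position only.
        batch.Add(inputEmbeddings[i], i, LLamaSeqId.Zero,
                  i == inputEmbeddings.Length - 1);
    }

    // Works with any LLamaContext, not just the BatchedExecutor; the return
    // value is a status code indicating whether decoding succeeded.
    var result = context.Decode(batch);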