Late Chunking (https://arxiv.org/pdf/2409.04701)
Is your feature request related to a problem? Please describe.
Practitioners often split text documents into smaller chunks and embed them separately. However, chunk embeddings created in this way can lose contextual information from surrounding chunks, resulting in sub-optimal representations.
Describe the solution you'd like
Most likely a new embedder, or an option on the Hugging Face embedder, would need to be implemented to support this.
I could try to implement this if it sounds interesting to you.
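To make the idea concrete, here is a minimal, runnable sketch of the late-chunking pooling step (not Vespa code; the function names, shapes, and mean-pooling choice are assumptions based on the paper's description). Naive chunking embeds each chunk in isolation; late chunking runs one forward pass over the whole document and only afterwards pools the contextualized token embeddings per chunk, so every chunk vector retains context from the surrounding text.

```python
def mean_pool(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def late_chunk(token_embeddings, chunk_size):
    """Pool contextualized token embeddings into fixed-size token chunks.

    `token_embeddings` stands in for the output of a single forward pass
    of a long-context model over the entire document (an assumption here:
    a plain list of per-token vectors, not a real model API).
    """
    return [
        mean_pool(token_embeddings[start:start + chunk_size])
        for start in range(0, len(token_embeddings), chunk_size)
    ]

# Toy stand-in for a model's output: 10 tokens, 4 dimensions each.
token_embeddings = [[float(t + d) for d in range(4)] for t in range(10)]
chunk_embeddings = late_chunk(token_embeddings, chunk_size=4)
print(len(chunk_embeddings))   # 3 chunks: tokens 0-3, 4-7, 8-9
print(chunk_embeddings[0])     # [1.5, 2.5, 3.5, 4.5]
```

The only difference from conventional chunked embedding is the order of operations: chunk boundaries are applied after the transformer, not before it.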
That would be great!
One challenge is modeling the chunking strategy and deciding whether mapping from a chunk embedding back to a span in the original text should be possible. The paper uses different chunk-splitting methods, but even with a fixed number of tokens (e.g., 256), the user would need to implement the mapping between a chunk and its span in the longer text if we represent late chunking in the schema like other embedders:
schema doc {
    document doc {
        field longtext type string { .. }
    }
    field chunk_embeddings type tensor<float>(chunk{}, v[1024]) {
        indexing: input longtext | embed late-chunker-id | attribute | index
    }
}
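To illustrate the chunk-to-span mapping that this schema leaves to the user, here is a toy sketch. A whitespace tokenizer stands in for a real subword tokenizer (an assumption: real tokenizers typically expose character offsets per token, which is what makes this mapping feasible), and the chunk size is hypothetical.

```python
import re

def tokenize_with_offsets(text):
    """Return (token, (start, end)) pairs using whitespace splitting.

    A stand-in for a real subword tokenizer with offset tracking.
    """
    return [(m.group(), m.span()) for m in re.finditer(r"\S+", text)]

def chunk_spans(text, tokens_per_chunk):
    """Map each fixed-size token chunk to its character span in `text`."""
    offsets = [span for _, span in tokenize_with_offsets(text)]
    spans = []
    for start in range(0, len(offsets), tokens_per_chunk):
        group = offsets[start:start + tokens_per_chunk]
        # Span runs from the first token's start to the last token's end.
        spans.append((group[0][0], group[-1][1]))
    return spans

text = "Berlin is the capital of Germany . It is also a state ."
spans = chunk_spans(text, tokens_per_chunk=4)
for i, (s, e) in enumerate(spans):
    print(i, repr(text[s:e]))
```

Without something like this, a retrieved `chunk{}` label in the tensor cannot be traced back to the text it summarizes, which matters for highlighting and snippet generation.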
We have similar problems with the ColBERT embedder, a related concept, except that there is no pooling operation: each token becomes its own vector.
A nice overview of the method, from the paper: