
Late Chunking (https://arxiv.org/pdf/2409.04701)

Open oskrim opened this issue 1 year ago • 3 comments

Is your feature request related to a problem? Please describe. Practitioners often split text documents into smaller chunks and embed them separately. However, chunk embeddings created this way can lose contextual information from surrounding chunks, resulting in sub-optimal representations.
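For context, the core idea of the paper is to encode the whole document in one pass, so every token embedding carries document-wide context, and only then pool token embeddings per chunk. A minimal sketch in plain Python with mean pooling; `late_chunk` and its inputs are illustrative, not an existing Vespa or Huggingface API:

```python
def mean_pool(vectors):
    """Average a list of equal-length vectors element-wise."""
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

def late_chunk(token_embeddings, chunk_size):
    """Late chunking: the token embeddings were produced by encoding the
    full document in one pass (so they already carry cross-chunk context);
    here we only pool them into fixed-size chunks afterwards."""
    chunks = []
    for start in range(0, len(token_embeddings), chunk_size):
        chunks.append(mean_pool(token_embeddings[start:start + chunk_size]))
    return chunks
```

Naive chunking would instead run the encoder once per chunk, so the token embeddings being pooled would never see the surrounding chunks.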

Describe the solution you'd like Most likely a new embedder, or an option on the Huggingface embedder, would need to be implemented to support this.

oskrim avatar Oct 20 '24 08:10 oskrim

I could try to implement this, if this sounds interesting to you

oskrim avatar Oct 20 '24 08:10 oskrim

That would be great!

bratseth avatar Oct 21 '24 06:10 bratseth

One challenge is modeling the chunking strategy and determining whether mapping from a chunk embedding back to a span in the original text should be possible. The paper uses several chunk-splitting methods, but even with a fixed number of tokens per chunk (e.g., 256), the user would need to implement the mapping between a chunk and its span in the longer text if we represent late chunking in the schema like other embedders:

schema doc {
  document doc {
    field longtext type string { .. }
  }
  field chunk_embeddings type tensor<float>(chunk{}, v[1024]) {
    indexing: input longtext | embed late-chunker-id | attribute | index
  }
}
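To illustrate the mapping problem: with a fixed chunk size in tokens, recovering the character span of a chunk requires the tokenizer's character offsets. A rough sketch, using a whitespace tokenizer as a stand-in for the embedder's real tokenizer (all names here are hypothetical):

```python
import re

def token_offsets(text):
    """Return (start, end) character offsets for each whitespace token.
    A real implementation would take offsets from the embedder's tokenizer."""
    return [(m.start(), m.end()) for m in re.finditer(r"\S+", text)]

def chunk_spans(text, tokens_per_chunk):
    """Map each fixed-size token chunk to its (start, end) character span
    in the original text, so a chunk embedding can be traced back."""
    offsets = token_offsets(text)
    spans = []
    for i in range(0, len(offsets), tokens_per_chunk):
        group = offsets[i:i + tokens_per_chunk]
        spans.append((group[0][0], group[-1][1]))
    return spans
```

If the embedder did something like this internally, it could emit the spans alongside the `chunk{}` tensor; otherwise the user has to reimplement the exact same chunk-splitting logic outside Vespa to recover the spans.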

We have a similar problem with the ColBERT embedder, a related concept, but one where there is no pooling operation and each token becomes a vector.

A nice overview of the method is given in a figure in the paper.

jobergum avatar Dec 11 '24 12:12 jobergum