
Elasticsearch: support dense, sparse, hybrid with inference in Elasticsearch

Open maxjakob opened this issue 1 year ago • 9 comments

Summary and motivation

Elasticsearch offers multiple retrieval features including

  • approximate dense vector retrieval with embedding inference in Python or in Elasticsearch
  • exact dense vector retrieval with embedding inference in Python
  • sparse vector retrieval with embedding inference in Elasticsearch
  • hybrid retrieval (dense+BM25) with embedding inference in Elasticsearch

Other libraries such as LangChain already have all these options integrated. It would be great to also have them available in Haystack. Elastic is currently working on a Python package that will make the integration of these features easier. Here we want to discuss how to best make them available.
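
For context, here is roughly what the dense case looks like at the Elasticsearch level with the official `elasticsearch` Python client, once with inference in Python and once with inference inside Elasticsearch. This is a sketch only: index, field, and model names are placeholders, and the server-side variant assumes a text-embedding model has already been deployed to the cluster.

```python
# Illustrative only (not Haystack code): the same dense retrieval done with the
# plain Elasticsearch client, with query embedding computed in Python vs. in
# Elasticsearch. Index, field, and model names are placeholders.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")

# 1) Inference in Python: embed the query locally and send only the vector.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
resp = es.search(
    index="my-index",
    knn={
        "field": "embedding",
        "query_vector": model.encode("how do I reset my password?").tolist(),
        "k": 10,
        "num_candidates": 100,
    },
)

# 2) Inference in Elasticsearch: send the raw text, a deployed model embeds it
#    server-side via query_vector_builder.
resp = es.search(
    index="my-index",
    knn={
        "field": "embedding",
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "sentence-transformers__all-minilm-l6-v2",  # placeholder model ID
                "model_text": "how do I reset my password?",
            }
        },
        "k": 10,
        "num_candidates": 100,
    },
)
```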

Questions

  • Does Haystack want to enable inference in Elasticsearch? The current design assumes that the mapping from input string to embedding vector is done in Python before calling a retriever. With inference in Elasticsearch, this would change. For example, users could configure a dense vector model in Elasticsearch and then pass input strings directly from Haystack.
  • The options mentioned above require different ways of indexing the data. How do we best incorporate this requirement? The current document store abstraction largely assumes that there is only one way of indexing (a concrete sketch follows below).
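
To make the indexing question more concrete, here is a sketch of how the index setup differs between a dense index (embeddings computed in Python) and a sparse/ELSER index with inference at ingest time. Again this uses the plain `elasticsearch` client, not Haystack; field names, model IDs, and mapping types (`sparse_vector` vs. the older `rank_features`) depend on the Elasticsearch version and are illustrative.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Dense retrieval with inference in Python: documents arrive with a
# pre-computed vector, so the index only needs a dense_vector field.
es.indices.create(
    index="dense-index",
    mappings={
        "properties": {
            "content": {"type": "text"},
            "embedding": {
                "type": "dense_vector",
                "dims": 384,
                "index": True,
                "similarity": "cosine",
            },
        }
    },
)

# Sparse retrieval with inference in Elasticsearch (e.g. ELSER): the index
# needs a sparse_vector field plus an ingest pipeline that runs the model
# at index time.
es.ingest.put_pipeline(
    id="elser-pipeline",
    processors=[
        {
            "inference": {
                "model_id": ".elser_model_2",
                "input_output": [
                    {"input_field": "content", "output_field": "tokens"}
                ],
            }
        }
    ],
)
es.indices.create(
    index="sparse-index",
    mappings={
        "properties": {
            "content": {"type": "text"},
            "tokens": {"type": "sparse_vector"},
        }
    },
    settings={"index": {"default_pipeline": "elser-pipeline"}},
)
```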

Detailed design

Concrete proposal:

  1. ElasticsearchDocumentStore takes a retrieval_strategy argument, similar to how it is done in LangChain. Calls to write_documents use the retrieval strategy to determine how to index the data.
  2. We add a number of different retrievers (ElasticsearchDenseVectorRetriever, ElasticsearchSparseVectorRetriever, ElasticsearchHybridRetriever, ...) that are initialized with an ElasticsearchDocumentStore. The retrieval strategy has to match the expectations of the individual retriever, and we check that this is the case upon initialization. For retrieving documents, the retrievers call a search method on the document store, as this is the established pattern. A sketch of what the strategy abstraction could look like follows below.
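
A minimal sketch of the strategy abstraction from point 1. All names here (`RetrievalStrategy`, `SparseVectorStrategy`, the method names) are placeholders for discussion, not existing Haystack or elasticsearch-py APIs.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class RetrievalStrategy(ABC):
    """Knows how to build the index mapping and the search request for one retrieval mode."""

    @abstractmethod
    def index_mapping(self) -> Dict[str, Any]:
        """Mapping used by write_documents when creating the index."""

    @abstractmethod
    def search_body(self, query: str, top_k: int) -> Dict[str, Any]:
        """Request body used by the document store's search method."""


class SparseVectorStrategy(RetrievalStrategy):
    """ELSER-style sparse retrieval with inference inside Elasticsearch."""

    def __init__(self, model_id: str = ".elser_model_2"):
        self.model_id = model_id

    def index_mapping(self) -> Dict[str, Any]:
        return {
            "properties": {
                "content": {"type": "text"},
                "tokens": {"type": "sparse_vector"},
            }
        }

    def search_body(self, query: str, top_k: int) -> Dict[str, Any]:
        return {
            "size": top_k,
            "query": {
                "text_expansion": {
                    "tokens": {"model_id": self.model_id, "model_text": query}
                }
            },
        }


# ElasticsearchDocumentStore(retrieval_strategy=SparseVectorStrategy()) would call
# index_mapping() from write_documents and search_body() from its search method;
# ElasticsearchSparseEmbeddingRetriever would check at init time that the store's
# strategy is a SparseVectorStrategy and raise otherwise.
```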

Checklist

If the request is accepted, ensure the following checklist is complete before closing this issue.

### Tasks
- [ ] The code is documented with docstrings and was merged in the `main` branch
- [ ] Docs are published at https://docs.haystack.deepset.ai/
- [ ] There is a GitHub workflow running the tests for the integration nightly and at every PR
- [ ] A label named like `integration:<your integration name>` has been added to this repo
- [ ] The [labeler.yml](https://github.com/deepset-ai/haystack-core-integrations/blob/main/.github/labeler.yml) file has been updated
- [ ] The package has been released on PyPI
- [ ] An integration tile has been added to https://github.com/deepset-ai/haystack-integrations
- [ ] The integration has been listed in the [Inventory section](https://github.com/deepset-ai/haystack-core-integrations#inventory) of this repo README
- [ ] There is an example available to demonstrate the feature
- [ ] The feature was announced through social media

maxjakob avatar Apr 29 '24 14:04 maxjakob

@anakin87 @silvanocerza Would be great to get your input here.

maxjakob avatar Apr 29 '24 14:04 maxjakob

I don't see why not, to be fair; I'm not against this at all. Everything you wrote makes total sense in my opinion.

silvanocerza avatar Apr 30 '24 10:04 silvanocerza

Are you going to handle the implementation of this? 👀

silvanocerza avatar Apr 30 '24 10:04 silvanocerza

Thank you for your interest!

  • I would like to provide users with new options (such as inference in Elasticsearch) without significantly breaking existing ones.
  • This is the current naming convention for Retrievers. We should discuss together what would be the best names for the new retrievers.

anakin87 avatar Apr 30 '24 12:04 anakin87

I agree that breaking changes should be avoided. We can attempt to integrate this into the existing document store. If it proves too hard to do without breakage, we can add a new class (and deprecate the old one). What do you think?

Regarding naming, here are some proposals (I'm completely open to other names):

  • ElasticsearchBM25Retriever
  • ElasticsearchDenseEmbeddingRetriever
    • This would have a hybrid option. Alternatively, we can add an ElasticsearchHybridRetriever (see the query sketch below).
  • ElasticsearchDenseExactEmbeddingRetriever (not convinced we need it but it is more efficient for <10k documents)
  • ElasticsearchSparseEmbeddingRetriever
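
For the hybrid option mentioned above: Elasticsearch can combine a BM25 `query` with a `knn` clause in a single search request, so either naming variant would essentially wrap a call like the following (a sketch; index, field, and model names are placeholders).

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Hybrid retrieval: BM25 scoring on the text field combined with approximate
# kNN on the vector field, here with query embedding computed in Elasticsearch.
resp = es.search(
    index="my-index",
    query={"match": {"content": "how do I reset my password?"}},  # BM25 part
    knn={
        "field": "embedding",
        "query_vector_builder": {
            "text_embedding": {
                "model_id": "my-dense-model",  # placeholder model ID
                "model_text": "how do I reset my password?",
            }
        },
        "k": 10,
        "num_candidates": 100,
    },
)
```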

maxjakob avatar Apr 30 '24 16:04 maxjakob

I'm going to work on the LangChain integration. It will become the reference implementation for this kind of integration with the package mentioned above. It would be fantastic if somebody from the community wanted to give it a shot and integrate this into Haystack. That somebody would also be invited to write a blog post for Elastic Search Labs to get some exposure for themselves and their Haystack use case and generate a bit of marketing buzz, if they are interested in that kind of thing.

maxjakob avatar Apr 30 '24 16:04 maxjakob

The mentioned LangChain reference implementation can be found here: https://github.com/langchain-ai/langchain-elastic/blob/66cf6f110dbfb2a89a1f92fbaa6488022275e17d/libs/elasticsearch/langchain_elasticsearch/vectorstores.py#L553

maxjakob avatar May 24 '24 09:05 maxjakob

I would like to take on this issue and contribute to the project. I have experience with Elasticsearch, vector databases, and generative AI, and I believe I can provide a solid implementation. I noticed that the issue is set to be closed on April 30th due to inactivity, but I’m interested in working on it.

Could you please assign it to me? I’ll start working on it right away.

nurssyhn avatar Mar 14 '25 19:03 nurssyhn

Hey @nurssyhn, thanks for taking this on! Please feel free to work on it and link your PR to this issue once it's ready. We avoid assigning community members to issues, but we'll make sure to check in with you.

sjrl avatar Mar 19 '25 09:03 sjrl