
community: Use official ollama python library for embedding

Open JokeJason opened this issue 1 year ago • 2 comments

Thank you for contributing to LangChain!

  • [x] PR title: "community: Use official ollama python library for embedding"

  • Description: This PR updates the OllamaEmbeddings class in the community package to use the official ollama Python library for generating embeddings. This removes the need to manually make HTTP requests and handle responses, simplifying the code and making it more maintainable. The ollama library has been added as a dependency in the pyproject.toml file.

  • Issue: Not applicable

  • Dependencies: This change introduces a new dependency on the ollama library

  • Twitter handle: Not applicable for this change

  • [x] Add tests and docs: N/A

  • [x] Lint and test: Run make format, make lint and make test from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
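The change described above boils down to delegating the HTTP round-trip to the official client. A minimal sketch of the idea, assuming the client exposes `embeddings(model=..., prompt=...)` returning `{"embedding": [...]}` (a fake client stands in for `ollama.Client` so the example runs without an Ollama server; class and method names here are illustrative, not the actual PR code):

```python
from typing import List

class FakeOllamaClient:
    """Stand-in for ollama.Client; returns deterministic fake vectors."""
    def embeddings(self, model: str, prompt: str) -> dict:
        # The real client POSTs to the server's /api/embeddings endpoint
        # and returns {"embedding": [...]}; we fake a 3-dim vector.
        return {"embedding": [float(len(prompt)), float(len(model)), 1.0]}

class OllamaEmbeddingsSketch:
    """Illustrative embeddings wrapper that delegates to an ollama-style client."""
    def __init__(self, client, model: str = "llama2"):
        self.client = client
        self.model = model

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # One client call per document; no manual requests/response parsing.
        return [
            self.client.embeddings(model=self.model, prompt=t)["embedding"]
            for t in texts
        ]

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]

emb = OllamaEmbeddingsSketch(FakeOllamaClient())
vectors = emb.embed_documents(["hello", "world!"])
print(len(vectors), len(vectors[0]))  # 2 3
```

Swapping `FakeOllamaClient` for a real `ollama.Client(host=...)` is the whole point of the change: retries, streaming, and error handling live in the library rather than in hand-rolled request code.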

Additional guidelines:

  • Make sure optional dependencies are imported within a function.
  • Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
  • Most PRs should not touch more than one package.
  • Changes should be backwards compatible.
  • If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.

JokeJason avatar Apr 15 '24 02:04 JokeJason

The latest updates on your projects. Learn more about Vercel for Git ↗︎

1 Ignored Deployment
langchain: ⬜️ Ignored (Inspect) · Visit Preview · Updated (UTC): May 14, 2024 9:40am

vercel[bot] avatar Apr 15 '24 02:04 vercel[bot]

@JokeJason, your solution is a game-changer for embeddings! I have the same issue with the Ollama LLM code, and I think your solution could work there too. Would you be open to exploring this?

nikzasel avatar Apr 20 '24 23:04 nikzasel

@JokeJason, your solution is a game-changer for embeddings! I have the same issue with the Ollama LLM code, and I think your solution could work there too. Would you be open to exploring this?

Yes, I was expecting the same thing to happen on the Ollama Chat side, as the official Python library is much more capable. Maybe I should further increase the scope of this PR?

JokeJason avatar Apr 21 '24 12:04 JokeJason

Yes, I was expecting the same thing to happen on the Ollama Chat side, as the official Python library is much more capable. Maybe I should further increase the scope of this PR?

Please do, if you have spare time for this PR.

nikzasel avatar Apr 22 '24 22:04 nikzasel

For those interested: in https://github.com/MaxiBoether/langchain-ollama-package I have prototyped using the ollama package for the chat model as well, which massively speeds up inference. I need to clean this up a bit (tbh, Claude Opus did most of the work :D); I might submit a PR later on.
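The same delegation idea extends naturally to chat, assuming the client exposes a `chat(model=..., messages=[...])` call returning `{"message": {"role": ..., "content": ...}}` as in the official library. A hedged sketch with a fake client (so it runs without a server; this is not the linked prototype's actual code):

```python
class FakeOllamaChatClient:
    """Stand-in for ollama.Client; echoes the last user message."""
    def chat(self, model: str, messages: list) -> dict:
        # The real client returns {"message": {"role": "assistant", "content": ...}}.
        last = messages[-1]["content"]
        return {"message": {"role": "assistant", "content": f"echo: {last}"}}

def ask(client, model: str, prompt: str) -> str:
    """Illustrative helper: one chat turn via an ollama-style client."""
    resp = client.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

print(ask(FakeOllamaChatClient(), "llama2", "hi"))  # echo: hi
```

With a real client, the library also handles streaming responses, which is where much of the claimed inference speedup in an interactive setting would come from.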

MaxiBoether avatar May 14 '24 09:05 MaxiBoether

Yes, I was expecting the same thing to happen on the Ollama Chat side, as the official Python library is much more capable. Maybe I should further increase the scope of this PR?

Please do, if you have spare time for this PR.

Hi @nikzasel, it looks like @MaxiBoether has implemented something for the chat and LLM parts. I think I just need to sit back and wait :)

JokeJason avatar May 14 '24 09:05 JokeJason

Closing as I believe this is now implemented in the langchain-ollama package, but let me know if I'm mistaken.

ccurme avatar Aug 14 '24 18:08 ccurme