community: Use official ollama python library for embedding
Thank you for contributing to LangChain!
- [x] PR title: "community: Use official ollama python library for embedding"
- Description: This PR updates the `OllamaEmbeddings` class in the `community` package to use the official `ollama` Python library for generating embeddings. This removes the need to manually construct HTTP requests and handle responses, which simplifies the code and makes it more maintainable. The `ollama` library has been added as a dependency in the `pyproject.toml` file.
- Issue: Not applicable
- Dependencies: This change introduces a new dependency on the `ollama` library.
- Twitter handle: Not applicable for this change
- [x] Add tests and docs: N/A
- [x] Lint and test: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
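As a rough sketch of the approach described above: with the official `ollama` package, generating embeddings no longer requires hand-rolled HTTP calls. The helper name and the injectable `client` parameter below are illustrative, not the actual `OllamaEmbeddings` implementation in the diff; the optional dependency is imported inside the function, per the guidelines.

```python
def embed_documents(texts, model="llama2", client=None):
    """Embed a list of texts via the official ollama client.

    `client` is injectable for testing; by default the optional
    `ollama` package is imported lazily inside the function.
    """
    if client is None:
        import ollama  # optional dependency, imported within the function
        client = ollama
    # ollama.embeddings(...) returns a dict with an "embedding" key
    return [client.embeddings(model=model, prompt=t)["embedding"] for t in texts]
```

This assumes a local Ollama server is running with the chosen model pulled; the injectable client also makes the class straightforward to unit-test without a live server.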
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.
@JokeJason, your solution is a game-changer for embeddings! I have the same issue, but with the Ollama LLM code; I think your solution could work there too. Would you be open to exploring this?
Yes, I was expecting the same thing to apply to the Ollama Chat part, as the official Python library is much more capable. Maybe I should further increase the scope of this PR?
Please do, if you have spare time for this PR.
For those interested: in https://github.com/MaxiBoether/langchain-ollama-package I have prototyped using the ollama package for the chat model as well, which massively speeds up inference. I need to clean this up a bit (tbh, Claude Opus did most of the work :D); I might submit a PR later on.
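To illustrate the chat-side idea being discussed (this is a hedged sketch, not the linked prototype's actual code; the function name and `client` parameter are hypothetical), the official library's `ollama.chat` call might be wrapped like this:

```python
def chat_once(prompt, model="llama3", client=None):
    """Send a single user message and return the model's reply text.

    `client` is injectable for testing; by default the optional
    `ollama` package is imported lazily inside the function.
    """
    if client is None:
        import ollama  # optional dependency, imported within the function
        client = ollama
    # ollama.chat(...) returns a dict with response["message"]["content"]
    response = client.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]
```

As with the embeddings sketch, this assumes a running Ollama server with the model pulled locally.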
Hi @nikzasel, it looks like @MaxiBoether has implemented something for the chat and LLM parts. I think I just need to sit and wait :)
Closing as I believe this is now implemented in the langchain-ollama package, but let me know if I'm mistaken.