feat: OpenLLM
🎉 OpenLLM 🤝 LangChain
OpenLLM is a new open platform for operating large language models (LLMs) in production. Serve, deploy, and monitor any LLMs with ease.
OpenLLM lets developers and researchers easily run inference with any open-source LLMs, deploy them to the cloud or on-premises, build powerful AI apps, and fine-tune their own LLMs (coming soon...)
It currently supports ChatGLM, Dolly-v2, Flan-T5, Falcon, StarCoder, and more to come. One can also easily start a REST or gRPC server, powered by BentoML.
Now that that's out of the way, let's dive in!
The current dependency for this integration: openllm
This integration brings an OpenLLM LLM to LangChain, which can be used both for
running LLMs locally and for interacting with a remote OpenLLM server.
To quickly start a local LLM, simply do the following:
from langchain.llms import OpenLLM

# Load Dolly v2 locally; extra kwargs such as device_map are forwarded to the underlying model.
llm = OpenLLM(model_name="dolly-v2", model_id="databricks/dolly-v2-7b", device_map="auto")
llm("What is the difference between a duck and a goose? And why are there so many geese in Canada?")
As mentioned above, langchain.llms.OpenLLM can also interact with a remote OpenLLM
server. Given a running OpenLLM server at http://44.23.123.1, you can do the following:
from langchain.llms import OpenLLM

# Connect to a running OpenLLM server over gRPC instead of loading the model locally.
llm = OpenLLM(server_url="http://44.23.123.1:3000", server_type="grpc")
llm("What is the difference between a duck and a goose? And why are there so many geese in Canada?")
Features soon to be open-sourced:
- An OpenAI-compatible API, allowing users to easily use LangChain's OpenAI LLM (see the sketch after this list).
- SSE support for the OpenLLM server, allowing users to stream inference results.
- Last but not least, easily fine-tune your own LLMs with LLM.tuning()
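To illustrate the first item above: once the OpenAI-compatible API lands, a hypothetical setup could point LangChain's existing OpenAI wrapper at an OpenLLM server via openai_api_base. The endpoint path and key below are placeholders for a feature that is not released yet:

from langchain.llms import OpenAI

# Hypothetical: an OpenLLM server exposing an OpenAI-compatible API.
# The base URL, path, and key are placeholders, not a shipped feature.
llm = OpenAI(openai_api_base="http://44.23.123.1:3000/v1", openai_api_key="na")
llm("What is the difference between a duck and a goose?")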
Finally, I would love to hear feedback from the community about the project. Feel free to reach out to me on Twitter @aarnphm_, and join our Discord for the latest updates and developments.
Signed-off-by: Aaron [email protected]
Before submitting
I have added tests for this integration.
Who can review?
Tag maintainers/contributors who might be interested:
cc @hwchase17 @agola11
a notebook example would be great! thanks
Hi @hwchase17, thanks for the feedback! We've added an example notebook and the PR is ready for another look 🙏
@dev2049 some conflicts with docs, gonna defer to you here
@aarnphm is attempting to deploy a commit to the LangChain Team on Vercel.
A member of the Team first needs to authorize it.
Hopefully I fixed the docs 😃
kindly cc @hwchase17 for another round
resolved master merge conflicts and fixed some lint issues here, if you want to merge this into your branch: https://github.com/hwchase17/langchain/compare/feat/openllm?expand=1
Hi @dev2049, thanks for the help. I have updated accordingly.
Hi @dev2049, I just want to confirm that you will take this from here?
yep, am about to land to master in #6578! sorry thought something was wrong with poetry.lock but actually looks like it was all good 👍
Sounds good. Thanks again for all the help!