Add integration for LocalAI
Feature request
Integration with LocalAI, including its extended endpoints for downloading models from the gallery.
Motivation
LocalAI is a self-hosted OpenAI drop-in replacement with support for multiple model families: https://github.com/go-skynet/LocalAI
Your contribution
Not a Python guru, so it might take me a few cycles here.
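For concreteness, here is a rough sketch of what a gallery integration might call under the hood. This is a hedged example, not a confirmed API: it assumes LocalAI's documented gallery endpoints (POST /models/apply to start a model download and GET /models/jobs/{uuid} to poll it) and a local instance at http://localhost:8080; endpoint names and response fields may differ across LocalAI versions.

```python
# Hedged sketch of the LocalAI gallery flow: start a model download
# from a gallery, then poll the resulting job until it completes.
import time

import requests

BASE_URL = "http://localhost:8080"  # assumption: local LocalAI instance

# Ask LocalAI to install a model definition from a gallery URL.
resp = requests.post(
    f"{BASE_URL}/models/apply",
    json={"url": "github:go-skynet/model-gallery/gpt4all-j.yaml"},
)
resp.raise_for_status()
job_uuid = resp.json()["uuid"]  # assumption: response carries a job uuid

# Poll the job until the download has been processed.
while True:
    status = requests.get(f"{BASE_URL}/models/jobs/{job_uuid}").json()
    if status.get("processed"):
        break
    time.sleep(2)

print("gallery job finished:", status.get("message"))
```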
Hey, I would like to take on this task and would be willing to contribute. Can it be assigned to me? Regards
Hi, @mudler. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you opened this issue as a feature request to add integration with LocalAI. You mentioned that you are not very experienced with Python and may need assistance with the implementation. Pratham-saraf has expressed interest in taking on the task and contributing to the project.
Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project. If you have any further questions or need assistance, please let us know.
I would highly appreciate this issue being picked up again!
The issue with the current state of affairs is that AzureChatOpenAI and ChatOpenAI both send requests to /v1/openai/deployments/MODEL_NAME/chat/completions but LocalAI expects completion requests to hit /v1/chat/completions.
We have the same issue... is there some workaround to point it at LocalAI w/o too many code changes?
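Not an official fix, but one low-change workaround sketch: since LocalAI speaks the plain OpenAI API, you can point the regular ChatOpenAI at it by overriding the API base, instead of going through AzureChatOpenAI (which injects the /openai/deployments/... path). The base URL and model name below are assumptions; adjust them to your LocalAI setup.

```python
# Minimal workaround sketch, assuming LocalAI serves its OpenAI-compatible
# API at http://localhost:8080/v1 and a model named "ggml-gpt4all-j"
# is installed (both are assumptions).
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(
    model_name="ggml-gpt4all-j",                 # model installed in LocalAI
    openai_api_base="http://localhost:8080/v1",  # route requests to LocalAI
    openai_api_key="not-needed",                 # LocalAI ignores the key by default
)

# Requests go to /v1/chat/completions, the route LocalAI expects.
print(chat([HumanMessage(content="Hello, LocalAI!")]))
```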
use another backend :sweat_smile:
I'm wondering which openai client version you are using? /v1/chat/completions is actually what is used currently: https://platform.openai.com/docs/api-reference/chat/create
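For reference, a minimal sketch with the pre-1.0 openai Python client showing which path it hits. With the default api_type, requests go to {api_base}/chat/completions, which is what LocalAI expects; the deployments-style path is only built when the client is switched into Azure mode. The base URL and model name below are assumptions.

```python
# Hedged sketch with the pre-1.0 openai Python client. With the default
# api_type ("open_ai"), the client posts to {api_base}/chat/completions;
# the /openai/deployments/{deployment}/... path only appears in Azure mode.
import openai

openai.api_base = "http://localhost:8080/v1"  # assumption: local LocalAI
openai.api_key = "not-needed"

# Issues POST http://localhost:8080/v1/chat/completions
completion = openai.ChatCompletion.create(
    model="ggml-gpt4all-j",  # hypothetical model installed in LocalAI
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```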
Hi @baskaryan, could you please assist with this issue? The user has provided additional context regarding the discrepancy in request endpoints between AzureChatOpenAI, ChatOpenAI, and LocalAI. Thank you!