
Add integration for LocalAI

Open mudler opened this issue 2 years ago • 1 comments

Feature request

Integration with LocalAI and with its extended endpoints to download models from the gallery.

Motivation

LocalAI is a self-hosted OpenAI drop-in replacement with support for multiple model families: https://github.com/go-skynet/LocalAI

Your contribution

I'm not a Python guru, so it might take me a few cycles here.

mudler avatar May 25 '23 16:05 mudler

Hey, I would like to take on this task and would be willing to contribute. Can this be assigned to me? Regards.

pratham-saraf avatar May 26 '23 14:05 pratham-saraf

Hi, @mudler. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you opened this issue as a feature request to add integration with LocalAI. You mentioned that you are not very experienced with Python and may need assistance with the implementation. Pratham-saraf has expressed interest in taking on the task and contributing to the project.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project. If you have any further questions or need assistance, please let us know.

dosubot[bot] avatar Sep 11 '23 16:09 dosubot[bot]

I would highly appreciate this issue being picked up again!

The issue with the current state of affairs is that `AzureChatOpenAI` and `ChatOpenAI` both send requests to `/v1/openai/deployments/MODEL_NAME/chat/completions`, but LocalAI expects completion requests to hit `/v1/chat/completions`.

l4b4r4b4b4 avatar Dec 05 '23 12:12 l4b4r4b4b4
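The path mismatch described above can be sketched in a few lines (paths as reported in this thread; the base URL is an assumed local LocalAI address, shown for illustration only):

```python
from urllib.parse import urljoin

BASE = "http://localhost:8080/v1"  # assumed LocalAI address

def azure_style_url(base: str, deployment: str) -> str:
    # Path shape that AzureChatOpenAI builds, per this thread.
    return urljoin(base.rstrip("/") + "/", f"openai/deployments/{deployment}/chat/completions")

def localai_url(base: str) -> str:
    # Path LocalAI actually serves for chat completions.
    return urljoin(base.rstrip("/") + "/", "chat/completions")

print(azure_style_url(BASE, "MODEL_NAME"))  # → .../v1/openai/deployments/MODEL_NAME/chat/completions
print(localai_url(BASE))                    # → .../v1/chat/completions
```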

> I would highly appreciate this issue being picked up again!
>
> The issue with the current state of affairs is that `AzureChatOpenAI` and `ChatOpenAI` both send requests to `/v1/openai/deployments/MODEL_NAME/chat/completions`, but LocalAI expects completion requests to hit `/v1/chat/completions`.

We have the same issue... is there some workaround to point it at LocalAI w/o too many code changes?

benm5678 avatar Jan 03 '24 18:01 benm5678
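One low-churn workaround is to use `ChatOpenAI` (rather than `AzureChatOpenAI`) and point its base URL at LocalAI, so requests go to `/v1/chat/completions`. A minimal sketch, assuming a LocalAI instance on `localhost:8080` and a LangChain/openai version that still honors the legacy `OPENAI_API_BASE` environment variable:

```shell
# Assumptions: LocalAI listening on localhost:8080; OPENAI_API_BASE is read
# by the installed LangChain/openai version. ChatOpenAI then targets
# /v1/chat/completions instead of the Azure deployment path.
export OPENAI_API_BASE="http://localhost:8080/v1"
export OPENAI_API_KEY="not-needed"  # LocalAI does not require a real key by default
```

Equivalently, the base URL can be passed directly to the `ChatOpenAI` constructor (`openai_api_base=...`) instead of via the environment.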

> I would highly appreciate this issue being picked up again! The issue with the current state of affairs is that `AzureChatOpenAI` and `ChatOpenAI` both send requests to `/v1/openai/deployments/MODEL_NAME/chat/completions`, but LocalAI expects completion requests to hit `/v1/chat/completions`.
>
> We have the same issue... is there some workaround to point it at LocalAI w/o too many code changes?

use another backend :sweat_smile:

l4b4r4b4b4 avatar Jan 05 '24 13:01 l4b4r4b4b4

> I would highly appreciate this issue being picked up again!
>
> The issue with the current state of affairs is that `AzureChatOpenAI` and `ChatOpenAI` both send requests to `/v1/openai/deployments/MODEL_NAME/chat/completions`, but LocalAI expects completion requests to hit `/v1/chat/completions`.

I'm wondering what openai client version you are using? `/v1/chat/completions` is actually what is used currently: https://platform.openai.com/docs/api-reference/chat/create

mudler avatar Jan 05 '24 14:01 mudler

Hi @baskaryan, could you please assist with this issue? The user has provided additional context regarding the discrepancy in request endpoints between AzureChatOpenAI, ChatOpenAI, and LocalAI. Thank you!

dosubot[bot] avatar Jan 05 '24 14:01 dosubot[bot]