Langchain Support

Open PhillipRt opened this issue 2 years ago • 5 comments

I'm not sure if LangChain support is already possible with this model, but if it isn't, I would like to request that it be implemented. If it is already possible, I would like to request that documentation be added explaining how to use it in combination with LangChain.

Using LangChain and LlamaIndex with Vicuna would be a great option for many solutions that require a lot of context and are therefore too expensive to use with an LLM API like OpenAI's.

Thank you for open-sourcing such a great model.

PhillipRt avatar Mar 31 '23 14:03 PhillipRt

They seem to be doing just fine without Lang.

dondre avatar Mar 31 '23 23:03 dondre

@dondre Well, LangChain opens up support for large contexts. I don't understand your point: if I want a chatbot that uses my specific emails and notes to answer my questions, LangChain is currently the only option.

PhillipRt avatar Apr 01 '23 16:04 PhillipRt

I misspoke earlier. There is zero evidence to support the claims being made in this empty repo. GPT4ALL has a branch for LangChain support, check them out.

dondre avatar Apr 01 '23 17:04 dondre

+1 for this. LangChain will open up a lot of new use cases on top of Vicuna.

takan1 avatar Apr 10 '23 23:04 takan1

We'll investigate LangChain integration and post updates in this thread.

zhisbug avatar Apr 10 '23 23:04 zhisbug

You can integrate with the LangChain Agent API yourself through a custom LLM agent, plus some custom inference code to get the right behavior. I've just implemented it here as an example: https://github.com/paolorechia/learn-langchain/tree/main

I don't think there are currently any endpoints exposing Vicuna's embeddings for integration with other use cases, though.
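For anyone wanting to try this, here is a minimal sketch of the custom-LLM pattern: LangChain lets you wrap any model by subclassing `langchain.llms.base.LLM` and implementing `_call(prompt, stop)` plus an `_llm_type` property. The class below mimics that interface using only the standard library so the offline parts are testable; the `http://localhost:8000/generate` endpoint, its JSON schema, and the Vicuna `USER:`/`ASSISTANT:` prompt template are my assumptions, not an official FastChat API.

```python
import json
import urllib.request


class VicunaLLM:
    """Sketch of a LangChain-style custom LLM wrapper for a local Vicuna server.

    In real LangChain code this class would subclass langchain.llms.base.LLM;
    here it only mirrors the interface (_llm_type, _call) to stay dependency-free.
    """

    def __init__(self, endpoint="http://localhost:8000/generate"):
        self.endpoint = endpoint  # hypothetical local inference server URL

    @property
    def _llm_type(self):
        return "vicuna"

    @staticmethod
    def build_prompt(user_message):
        # Single-turn Vicuna-style template (assumed v1.1 format).
        return f"USER: {user_message} ASSISTANT:"

    def _call(self, prompt, stop=None):
        # POST the rendered prompt to the inference server and return its text.
        # The {"prompt": ..., "stop": ...} schema is an assumed server contract.
        payload = json.dumps({"prompt": prompt, "stop": stop or []}).encode()
        req = urllib.request.Request(
            self.endpoint,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["text"]
```

With a real LangChain subclass, an instance of this wrapper can be passed anywhere a `BaseLLM` is expected (chains, agents), which is exactly what the learn-langchain repo linked above does with extra inference glue.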

paolorechia avatar Apr 18 '23 21:04 paolorechia

Please monitor #381

zhisbug avatar Apr 21 '23 04:04 zhisbug