FastChat
Langchain Support
I'm not sure if LangChain support is already possible with this model, but if it isn't, I would like to request that it be implemented. If it is already possible, I would like to request that documentation be added explaining how to use it in combination with LangChain.
Using LangChain and LlamaIndex with Vicuna would be a great option for many solutions that require a lot of context and are therefore too expensive to use with an LLM API like OpenAI's.
Thank you for open sourcing such a great model.
They seem to be doing just fine without Lang.
@dondre Well, LangChain opens up support for large contexts. I don't understand your point: if I want a chatbot that uses my specific emails and notes to answer my questions, LangChain is currently the only option.
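The "chatbot over my emails and notes" use case described above is the retrieval pattern that LangChain automates: embed your documents, retrieve the ones closest to the question, and stuff them into the prompt so the model answers from your own data. A toy, self-contained sketch of that pattern (the bag-of-words "embedding" here is only a stand-in for a real embedding model):

```python
# Toy sketch of retrieval-augmented prompting. A real setup would use a
# proper embedding model and vector store; bag-of-words cosine similarity
# is used here only to keep the example self-contained.
from collections import Counter
import math


def embed(text):
    """Stand-in embedding: bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(question, docs, k=1):
    """Return the k documents most similar to the question."""
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(question, docs):
    """Stuff the retrieved context into a prompt for the model."""
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

The resulting prompt string is what you would then send to Vicuna (or any LLM) for the final answer.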
I misspoke earlier. There is zero evidence to support the claims being made in this empty repo. GPT4ALL has a branch for LangChain support, check them out.
+1 for this. LangChain will open up a lot of new use cases on top of Vicuna.
We'll investigate LangChain integration and post updates in this thread.
You can integrate with the LangChain Agent API yourself through a Custom LLM Agent, plus some custom inference code to get the right behavior. I've just implemented it here as an example: https://github.com/paolorechia/learn-langchain/tree/main
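The Custom LLM approach mentioned above boils down to implementing LangChain's `_call` hook so it forwards prompts to a locally served Vicuna. A minimal sketch, with several assumptions clearly flagged: the `LLM` base class here is a stand-in for `langchain.llms.base.LLM` (so the snippet runs without LangChain installed), and the `/generate` endpoint plus its JSON shape are hypothetical, not FastChat's actual API.

```python
# Hypothetical sketch of a custom LLM wrapper around a local Vicuna server.
# Assumptions: the endpoint URL and {"prompt": ...} -> {"text": ...} JSON
# schema are illustrative; the LLM base class stands in for LangChain's.
import json
import urllib.request


def query_vicuna(prompt, endpoint="http://localhost:8000/generate"):
    """POST the prompt to a (hypothetical) local inference server."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]


class LLM:
    """Stand-in for langchain.llms.base.LLM: calling the object runs _call."""

    def __call__(self, prompt):
        return self._call(prompt)


class VicunaLLM(LLM):
    """Routes LangChain prompts to the local Vicuna inference function."""

    def __init__(self, client=query_vicuna):
        # Injectable client makes the wrapper testable without a live server.
        self.client = client

    def _call(self, prompt, stop=None):
        return self.client(prompt)
```

With a real LangChain install you would subclass `langchain.llms.base.LLM` instead of the stand-in, then pass the `VicunaLLM` instance to an agent or chain as usual.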
I don't think there are currently any endpoints that expose Vicuna's embeddings for integration with other use cases, though.
Please monitor #381