FastChat
How can we make Vicuna support plugins?
Maybe we could make it use standard plugins. How difficult would that be? Thanks so much for your work.
I am interested in figuring this out too. Currently, chat models are pretty bad at following instructions such as "use the context below to answer" or "answer only according to the provided context", so people use instruction-tuned LLMs instead of dialogue-tuned LLMs (it largely comes down to dataset composition). I believe the ChatGPT team is still figuring this out; it is not easy to get a chat model to follow along with source information or tools (vector stores/plugins). I can't seem to find many resources in this direction either~
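As a concrete illustration of the "answer only according to the provided context" pattern mentioned above, here is a minimal sketch. The `retrieve` helper, the prompt wording, and the sample store are all hypothetical, not FastChat or Vicuna APIs; a real setup would use an embedding-based vector store and send the prompt to the model.

```python
# Hypothetical sketch: wrap a user question with retrieved passages so a
# chat model is nudged to answer only from the provided source material.

def retrieve(question: str, store: dict, k: int = 2) -> list:
    """Toy retriever: rank stored passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        store.values(),
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str, passages: list) -> str:
    """Assemble an instruction that restricts the model to the given context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer ONLY using the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

store = {
    "doc1": "Vicuna is a chat model fine-tuned from LLaMA.",
    "doc2": "FastChat is a platform for serving chat models.",
}
question = "What is Vicuna?"
prompt = build_grounded_prompt(question, retrieve(question, store))
```

The interesting design question is whether the dialogue-tuned model will actually obey the restriction; as noted above, instruction-tuned models tend to follow this kind of constraint more reliably.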
I think it's highly likely there are multiple intermediate LLM chains (GPT-3/4) handling different intermediary tasks before the final output is fed to the chat model (ChatGPT), which then provides a comprehensive reply based on all the selected plugins and the given user query.
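The chained setup described above can be sketched in a few lines: one (stubbed) LLM call routes the query to a plugin, the plugin runs, and a second call composes the final reply. The `fake_router_llm` / `fake_compose_llm` functions, the plugin names, and the routing heuristic are all illustrative stand-ins; a real system would replace them with calls to GPT-3/4 or Vicuna.

```python
# Hypothetical sketch of a plugin-routing chain: route -> run plugin -> compose.

PLUGINS = {
    # Toy math plugin; eval is restricted to plain expressions here,
    # but a real plugin would use a proper parser, not eval.
    "calculator": lambda arg: str(eval(arg, {"__builtins__": {}})),
    "echo": lambda arg: arg,
}

def fake_router_llm(query: str):
    """Stand-in for an LLM that picks a plugin and extracts its argument."""
    if any(ch.isdigit() for ch in query):
        return "calculator", query.strip("?= ")
    return "echo", query

def fake_compose_llm(query: str, tool_output: str) -> str:
    """Stand-in for the chat model that writes the final reply."""
    return f"The answer to '{query}' is {tool_output}."

def answer(query: str) -> str:
    tool, arg = fake_router_llm(query)   # intermediary task: plugin selection
    result = PLUGINS[tool](arg)          # run the selected plugin
    return fake_compose_llm(query, result)  # final chat-model reply

reply = answer("2 + 3 * 4")  # -> "The answer to '2 + 3 * 4' is 14."
```

The point of the structure is that each intermediary step is a separate model call with a narrow job (selection, extraction, composition), which is easier to get right than asking one dialogue-tuned model to do everything in a single turn.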
@timothylimyl How would you advise choosing an instruction-tuned LLM?
We will figure this out when the time comes. Vicuna with online access would be cool, as would a memory plugin.
@Jgoodwin64 @timothylimyl @valdesguefa @merrymercy
Here you go ;) some fun hours ahead...
Ping me if you have questions! And help me spread the news if you have fun with it. https://twitter.com/XReyRobert/status/1655603305450463234?s=20