
Integrate with gpt-index/langchain

Open dacamp opened this issue 1 year ago • 29 comments

Description

Integrate the UI with gpt-index (llama-index) or langchain to greatly extend features

Additional Context https://github.com/jerryjliu/llama_index https://github.com/hwchase17/langchain

dacamp avatar Mar 30 '23 19:03 dacamp

I'm kind of attempting to do that, but it is proving quite difficult to prompt the model so it properly follows the langchain agent formatting. It simply does not want to follow it. I have not tested with Alpaca due to some LoRA 13B issues I'm experiencing. If you want to test it:

https://github.com/seijihariki/text-generation-webui/tree/langchain

Just run it with --agent. The Params tab does nothing for the agent for now (it is pretty raw).

If someone has more experience prompting for this objective, I would love some hints. This is my current prompt, based on the default langchain conversational agent and some example data used for completion:

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

TOOLS:
------

Assistant has access to the following tools:

> Python REPL: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.
> requests_get: A portal to the internet. Use this when you need to get specific content from a website. Input should be a  url (i.e. https://www.google.com). The output will be the text response of the GET request.
> Calculator: Useful for when you need to answer questions about math.
> Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.

To use a tool, please use the following format:

\`\`\`
Thought: Do I need to use a tool? Yes
Why: reason you need to use a tool.
Action: the action to take, should be one of [Python REPL, requests_get, Calculator, Wikipedia]
Action Input: the input to the action
Observation: the result of the action
\`\`\`

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

\`\`\`
Thought: Do I need to use a tool? No
AI: [your response here]
\`\`\`

Begin! Only use tools if information is missing.

Scratchpad:

<scratchpad data example>
Thought: Do I need to use a tool? Yes
Action: Python REPL
Action Input: import socket;socket.gethostname()
Observation: 'my-pc-linux'
</scratchpad data example>

Previous conversation history:

Human: Hello!
AI: Hello!</end>
Human: Are you fine? 
AI: I am fine.</end>
Human: Can you help me with something?
AI: What can I help you with?</end>

New Input: What is my hostname?

Expected continuation:

Thought: Do I need to use a tool? No
AI: Your hostname is my-pc-linux.</end>

I'm using </end> as an end detection token.

When (and if) I get langchain working I will integrate it with llama-index as well.

seijihariki avatar Mar 30 '23 23:03 seijihariki

I'll try this weekend, I've got some ideas for the prompt already. My use case is reading files from disk, so I may fork your branch and expand on that as well. Thank you!

dacamp avatar Mar 31 '23 19:03 dacamp

What model are you using?

dacamp avatar Mar 31 '23 19:03 dacamp

I'll try this weekend, I've got some ideas for the prompt already. My use case is reading files from disk, so I may fork your branch and expand on that as well. Thank you!

That would be great!

What model are you using?

For now only the base llama-13b-4bit-128. I finally managed to get a native alpaca-13b-4bit-128g to run here, so I will be testing this a bit.

I think I may end up just writing some code to use the webui api instead, but feel free to tap into the generation code using my current generic langchain LLM implementation.
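For anyone who wants to go the API route, a minimal custom langchain LLM wrapper around the webui might look roughly like the sketch below. The endpoint path, payload keys, and response shape are assumptions and need to match whatever API version of the webui you are running; `</end>` is the end-detection token mentioned above.

```python
from typing import List, Optional

import requests
from langchain.llms.base import LLM


class WebUILLM(LLM):
    """Sketch of a langchain LLM backed by a local text-generation-webui instance."""

    api_url: str = "http://localhost:5000/api/v1/generate"  # assumed endpoint

    @property
    def _llm_type(self) -> str:
        return "text-generation-webui"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        payload = {
            "prompt": prompt,                               # assumed payload key
            "max_new_tokens": 200,
            "stopping_strings": (stop or []) + ["</end>"],  # assumed payload key
        }
        data = requests.post(self.api_url, json=payload).json()
        text = data["results"][0]["text"]                   # assumed response shape
        # Trim at the first stop string in case the backend does not do it for us.
        for s in payload["stopping_strings"]:
            text = text.split(s)[0]
        return text
```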

seijihariki avatar Apr 01 '23 01:04 seijihariki

Managed to make it work. Sent some fixes to the langchain LLM.

image

image

The issue is that I kind of need to tell it to use a tool. If not, it will just hallucinate information instead of trying to use a tool:

image

image

It is also very bad at deciding exactly which tool to use:

image

image

Oh, the inverted names are just a display thing. The actual langchain agent does not have that problem. Just sent a fix to my branch.

I suspect we will have to appropriately finetune this on langchain agent formatted data. Vicuna 13B seems promising, but as we don't have the weights, we can only speculate.

seijihariki avatar Apr 01 '23 03:04 seijihariki

I'd be interested to see if this https://huggingface.co/chavinlo/toolpaca works well with the agent tooling prompts on langchain. I'm not sure though; I was trying to read up on the toolformer papers last night, but they went zoom zoom over my head.

I may try your setup later today to see if I can play around with the prompting.

Just to confirm: is it both this repo's langchain branch and the --agent flag? And can you link to the langchain branch you were using (or do you use main)?

Wingie avatar Apr 01 '23 09:04 Wingie

I'd be interested to see if this https://huggingface.co/chavinlo/toolpaca works well with the agent tooling prompts on langchain. I'm not sure though; I was trying to read up on the toolformer papers last night, but they went zoom zoom over my head.

Seems cool but a very big model for me to test locally haha

I may try your setup later today to see if I can play around with the prompting.

Oh, that would be great!

Just to confirm: is it both this repo's langchain branch and the --agent flag? And can you link to the langchain branch you were using (or do you use main)?

It's my own fork which has a "langchain" branch. For the langchain version, I just use the default pip one.

https://github.com/seijihariki/text-generation-webui/tree/langchain

You can run it with --agent or just switch environments via the settings page. I have not modified requirements.txt to include all the necessary packages, so you may need to install some on the fly (such as "wikipedia" and "python-dotenv").

seijihariki avatar Apr 01 '23 11:04 seijihariki

So I did manage to do some testing with `python server.py --model chavinlo_toolpaca --listen --load-in-8bit --agent`. Here are the logs: https://pastebin.com/nQC1ptxB

1 - it did manage to end up in some infinite loops and tried to run a totally non-terminal command in the terminal (lol)
2 - got this error when trying to ask for the weather in Amsterdam:

File "/home/ubuntu/venv/lib/python3.9/site-packages/langchain/agents/conversational/base.py", line 84, in _extract_tool_and_input
    raise ValueError(f"Could not parse LLM output: {llm_output}")
ValueError: Could not parse LLM output: `

image

Wingie avatar Apr 02 '23 19:04 Wingie

@Wingie toolpaca is finetuned around toolformer prompts, which are completely different in both syntax and operation from what langchain provides. See https://github.com/lucidrains/toolformer-pytorch/blob/main/toolformer_pytorch/prompts.py vs https://github.com/hwchase17/langchain/blob/master/langchain/agents/conversational/prompt.py

The proper way would be to implement a new toolformer agent with toolformer-like prompts and execution.
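To make the difference concrete: toolformer-style generations embed the call inline in the text (e.g. `[Calculator(400 / 1400)]`), while the langchain conversational agent expects separate `Action:` / `Action Input:` lines. A hypothetical parser for the inline style could be as small as:

```python
import re

# Matches inline toolformer-style calls such as "[Calculator(400 / 1400)]".
TOOL_CALL = re.compile(r"\[(?P<tool>\w+)\((?P<arg>[^)]*)\)\]")


def extract_toolformer_calls(text: str):
    """Return (tool, argument) pairs found inline in the model output."""
    return [(m.group("tool"), m.group("arg")) for m in TOOL_CALL.finditer(text)]


print(extract_toolformer_calls("The result is [Calculator(400 / 1400)] 0.29."))
# -> [('Calculator', '400 / 1400')]
```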

knoopx avatar Apr 06 '23 18:04 knoopx

Yeah, hmm, I see. The langchain agent kind of expects the ReAct format, though... I think the parsing and injection of content is more complicated in toolformer. I'm going to try the 30B Alpaca next to see if it fares better at following that pattern in langchain.

Wingie avatar Apr 07 '23 19:04 Wingie

@Wingie I have actually been testing with vicuna, trying to replicate its alpaca-like syntax with a custom agent. I will push what I have here to my branch, but you can try modifying my agent with the toolformer syntax to test it properly.

seijihariki avatar Apr 08 '23 22:04 seijihariki

I have pushed some modifications and a custom VicunaAgent that I have been using to try to make the LLM generate valid JSON as a response. It works in general, but it has been hard to make it use the results of the executed command:

image

It keeps looping, running the same command, and I do not know why. The JSON thing works wonders, though. You should be able to modify the VicunaAgent at https://github.com/seijihariki/text-generation-webui/blob/langchain/modules/lcagent/vicunaagent.py to parse the ToolFormer format properly.

seijihariki avatar Apr 08 '23 23:04 seijihariki

Will definitely try tomorrow; I was investigating the looping thing. Can you try passing `max_iterations=2, early_stopping_method="generate"` to `initialize_agent`?
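For reference, those arguments go on the executor through `initialize_agent`; a rough sketch (the tool list is illustrative, and `llm` is assumed to be whatever local-model wrapper you are using):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.memory import ConversationBufferMemory


def build_agent(llm):
    """Conversational ReAct agent with a step limit, as suggested above."""
    tools = load_tools(["python_repl", "wikipedia", "llm-math"], llm=llm)
    memory = ConversationBufferMemory(memory_key="chat_history")
    return initialize_agent(
        tools,
        llm,
        agent="conversational-react-description",
        memory=memory,
        max_iterations=2,                  # give up after two tool steps instead of looping forever
        early_stopping_method="generate",  # ask the LLM for a final answer when the limit is hit
        verbose=True,
    )
```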

Regarding the looping behaviour, it's a thing in the langchain agent code... it really wants the agent to follow this format to end the action loop:

Thought: Do i need to use a tool? No
Response: blah blah

The thought "I should use the hostname command" is the wrong format for langchain ReAct agents - there are other agent types (https://python.langchain.com/en/latest/modules/agents/agents/agent_types.html). The conversational-react agent really wants something like:

Thought: Should i use a tool yes?
Action: Terminal(hostname)
Observation : <result here>
Agent Response: blah blah

But great that you have made a vicuna agent; that should give us a starting point to override this behaviour. :D

Wingie avatar Apr 09 '23 17:04 Wingie

One thing I keep thinking back to - as soon as it can run code or a repl arbitrarily, locally, there's potential for problems down the line.

Has anyone found any repos that can isolate some of these? I've been all for bringing langchain into the mix, but this part keeps gnawing at me.

fblissjr avatar Apr 12 '23 14:04 fblissjr

Yeah, I was thinking about it too, but there are so many projects that do something similar. The agent framework is really powerful for accomplishing more complex goals. It's quite easy to enforce a whitelist of terminal commands that the agent is allowed to execute, but that does make it limiting in some respects. The good news is that I'm finding the 30B Alpaca LoRA is the smallest model that can keep to the langchain prompt.
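For what it's worth, such a whitelist only takes a few lines; here is a hedged sketch of a langchain Tool that runs only pre-approved, read-only commands (the command list is just an example):

```python
import shlex
import subprocess

from langchain.agents import Tool

# Example whitelist of read-only commands the agent is allowed to run.
ALLOWED_COMMANDS = {"hostname", "uptime", "date", "whoami"}


def run_whitelisted(command: str) -> str:
    """Run a shell command only if its executable is on the whitelist."""
    parts = shlex.split(command.strip())
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return f"Error: '{command}' is not an allowed command."
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr


safe_terminal = Tool(
    name="terminal",
    func=run_whitelisted,
    description="Runs a whitelisted, read-only shell command and returns its output.",
)
```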

More projects we should be concerned about but are fun to play with:

- https://github.com/mpaepper/llm_agents/
- https://github.com/virtualzx-nad/easy_llm_agents
- https://github.com/pHaeusler/micro-agent
- https://github.com/yoheinakajima/babyagi

Wingie avatar Apr 12 '23 19:04 Wingie

I mean, we could make it run things in a docker container for some kind of sandboxing to be safe. Almost sure that for now no LLM would break out of the container on accident. For some tasks though we may want the agent to interact with the system in some way. In that case maybe using a second "instance" of the LLM to detect dangerous commands could work?
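A rough sketch of the docker idea, assuming the docker CLI is available on the host (the image name, resource limits, and timeout are placeholders):

```python
import subprocess


def run_in_sandbox(code: str, image: str = "python:3.11-slim") -> str:
    """Run the agent's Python snippet in a throwaway container with no network access."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--network", "none",
         "--memory", "256m", "--cpus", "0.5",
         image, "python", "-c", code],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout or result.stderr


print(run_in_sandbox("print(2 + 2)"))
```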

seijihariki avatar Apr 13 '23 23:04 seijihariki

I mean, we could make it run things in a docker container for some kind of sandboxing to be safe. Almost sure that for now no LLM would break out of the container on accident. For some tasks though we may want the agent to interact with the system in some way. In that case maybe using a second "instance" of the LLM to detect dangerous commands could work?

After reading this and thinking more, it probably goes outside the scope of this project right now to try to do this. It would make for a great independent project to try to set standards around, though, if one doesn't exist already. :)

fblissjr avatar Apr 13 '23 23:04 fblissjr

I'm not so concerned about the LLM breaking out, to be honest. Well, maybe the llama.cpp ones... but the larger ones that need a GPU will, I think, try their best to stay on our good side. XD On a serious note, I think honestly nobody is thinking about safety. I'm on a trip now, but when I'm back I really want to see if https://github.com/DataBassGit/BabyBoogaAGI works as advertised. They don't use langchain, just prompting to create new agents. I think there could be a new agent type called "overseer" to review any commands before they get executed.

I like the idea of a docker container, though. I think tgwui should come with it by default.

Wingie avatar Apr 14 '23 09:04 Wingie

@Wingie didn't know about baby booga - will test it out after work. :)

fblissjr avatar Apr 14 '23 17:04 fblissjr

No luck getting it to generate anything meaningful - just empty data responses. Probably need to spend more time with it than I did.

fblissjr avatar Apr 14 '23 18:04 fblissjr

More testing results! The vicuna agent actually works quite well (even when not using vicuna), and it is very easy to customize the prompts. I trained a small LoRA with the ReAct pattern, and even so it fails with some LLM parsing errors on the default langchain agent, but the vicuna agent seems to have a higher success rate, especially with Wikipedia parsing!

It does fail while trying to take a square root in the calculator, though. :D image
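In case it helps, a hypothetical calculator tool that tolerates sqrt-style inputs could expose the math module to a restricted eval; this is only an illustration, not what the fork currently does:

```python
import math

# Names the expression is allowed to use: the math module and its functions.
ALLOWED_NAMES = {"math": math, **{k: getattr(math, k) for k in dir(math) if not k.startswith("_")}}


def safe_calc(expression: str) -> str:
    """Evaluate a single math expression such as "math.sqrt(5)" or "sqrt(5)"."""
    try:
        return str(eval(expression, {"__builtins__": {}}, ALLOWED_NAMES))
    except Exception as exc:  # report parse/eval errors back to the agent
        return f"Error: {exc}"


print(safe_calc("math.sqrt(5)"))  # -> 2.23606797749979
```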

Wingie avatar Apr 15 '23 05:04 Wingie

Nice! Able to share the LoRA or prompts, by chance?

fblissjr avatar Apr 15 '23 19:04 fblissjr

Yeah - https://huggingface.co/Wingie/lora_tbyw_v6/blob/main/datasets/react.txt is what I trained the LoRA with, using the training tab here.

Wingie avatar Apr 16 '23 03:04 Wingie

I have been doing some testing by myself here, and it seems I managed to find out some things. First, the model tends to ignore information not inside the conversation, so the scratchpad thing langchain does does not work well with vicuna-like models. Second, as a lot of the conversation format is inherently repetitive, I had to completely disable the repetition penalty to keep it from inventing new tools for every single query.

Also, keeping context from previous agent thoughts seems to help it a lot in following the format, so the langchain history format does not seem optimal for these models.

Simple attempts made in notebook mode:

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

TOOLS:
------

Assistant has access ONLY to the following tools:

> "python_repl": A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.
> "calculator": Useful for when you need to answer questions about math.
> "wikipedia": A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
> "terminal": Runs a bash command in the terminal.
> "respond": Sends a message to the user.

All responses by Assistant are json containing a "thought" key, containing the assistant's thoughts; a "tool" key, for the tool to be used by Assistant; and a "value" key, to send to the tool.

EXAMPLES:

Human: Hello!
AI: {
"thought": "The user greeted me. I should respond.",
"tool": "respond",
"value": "Hello!"
}
Human: Are you fine? 
AI: {
"thought": "I am functioning properly.",
"tool": "respond",
"value": "I am doing fine. How about you?"
}

CONVERSATION:

### Human
Hello
### Assistant
{
"thought": "The user greeted me.",
"tool": "respond",
"value": "Hello!"
}
### Human
Can you help me with something?
### Assistant
{
"thought": "The user is asking for assistance.",
"tool": "respond",
"value": "Of course! What can I help you with?"
}
### Human
I want to calculate 70 factorial
### Assistant
{
"thought": "The user wants to calculate 70 factorial.",
"tool": "calculator",
"value": "70!"
}
### Calculator Tool Result
[RESULT]
### Assistant
{
"thought": "The user wants to calculate 70 factorial. The result is 1.197857e+100.",
"tool": "respond",
"value": "The result is 1.197857e+100."
}
### Human
Okay! Who was the architect for the eiffel tower?
### Assistant
{
"thought": "The user wants to know who designed the Eiffel Tower.",
"tool": "wikipedia",
"value": "Gustave Eiffel"
}
### Wikipedia Tool Result
[RESULT]
### Assistant
{
"thought": "Alexandre Gustave Eiffel designed the Eiffel Tower.",
"tool": "respond",
"value": "Alexandre Gustave Eiffel designed the Eiffel Tower."
}
### Human
Thanks, can you get the current machine hostname please?
### Assistant
{
"thought": "The user wants to know the hostname of the current machine.",
"tool": "terminal",
"value": "hostname"
}
### Terminal Tool Result
seiji-pc-linux
### Assistant
{
"thought": "The hostname of the current machine is seiji-pc-linux.",
"tool": "respond",
"value": "seiji-pc-linux"
}

Tool responses are replaced by [RESULT] after any assistant use of the "respond" tool to save tokens. This way of doing things deviates so much from the original langchain format that I would probably have to write a whole new memory (sliding-window history) module and an entirely different base agent (not just implement the usual custom langchain agent).

As that seems a bit unreasonable (trying-to-fit-a-circle-in-a-square kind of unreasonable), I will try to implement this logic in an extension to the webui for now and will not be using langchain. This may limit the tools we can use a bit, but at least it adds some agent functionality as a proof of concept.

We can try implementing a more compatible format with langchain as well, but to be honest I just want my vicuna to be able to access tools and the internet for now!

I still think that if we collect a reasonable amount of data of langchain prompts conversing with OpenAI models using the current vicuna agent, we can finetune a model to understand a more langchain-like format and be a good langchain agent, but until we do that, the current models don't seem to grasp the expected langchain syntax at all.

[Edit] Interestingly, I think that if we chat with open models in this format, we could just parse it and transform it into the langchain format for training - no need for any ChatGPT usage, which could be good as we would be avoiding OpenAI's ToS.
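As an aside, the [RESULT] token-saving step described above is small in isolation; a toy version, with a hypothetical role/tool message structure, might be:

```python
def collapse_tool_results(history):
    """Replace tool outputs the assistant has already responded to with "[RESULT]".

    `history` is a hypothetical list of {"role": ..., "tool": ..., "content": ...} dicts.
    """
    collapsed = []
    for i, msg in enumerate(history):
        already_answered = any(
            m["role"] == "assistant" and m.get("tool") == "respond"
            for m in history[i + 1:]
        )
        if msg["role"] == "tool" and already_answered:
            collapsed.append({**msg, "content": "[RESULT]"})
        else:
            collapsed.append(msg)
    return collapsed
```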

seijihariki avatar Apr 22 '23 04:04 seijihariki

The complex_memory and long_term_memory extensions are great, but having a langchain-style self-ask prompt like "Update the summary of the whole conversation with all important points in under 50 characters." for dynamic memory would be top!
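If langchain does end up in the mix, it already ships something in that spirit; a minimal sketch, assuming `llm` is whatever local-model wrapper is in use (the 50-character limit would have to be enforced through the summarization prompt itself):

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationSummaryMemory


def build_summarizing_chat(llm):
    """Chat chain that keeps a rolling summary of the conversation instead of the full history."""
    memory = ConversationSummaryMemory(llm=llm)
    return ConversationChain(llm=llm, memory=memory, verbose=True)
```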

PsychoSmiley avatar Apr 22 '23 14:04 PsychoSmiley

Seems like vicuna tries to add actual newlines inside JSON objects, and that breaks JSON parsing.

Prompts in this format (langchain-like but with history and more uncommon markers) seem to work better:

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

TOOLS:
------

Assistant has access ONLY to the following tools:

> "calculator": "input" will run as a python expression. You should use this tool to perform any arithmetic operations
> "respond": Sends a message to the user.

All responses by Assistant are in a special format containing a "thought" key, containing the internal thoughts of Suri; a "tool" key, for the tool to be used by Assistant; and an "input" key, to send to the tool.

### EXAMPLES:

### Human
Hello!
### Assistant
#> thought: The user greeted me. I should respond.
#> tool: respond
#> input: Hello!
### Human
Are you fine?
### Assistant
#> thought: I am functioning properly.
#> tool: respond
#> input: I am doing fine. How about you?
### Human
I'm doing fine too. Write me 5 random words please?
### Assistant
#> thought: Let me write some random words.
#> tool: respond
#> input: I'm glad you are doing good. Here is your list:
 - Scintillating
 - Simple
 - Serendipity
 - Bicycle
 - Friend
### Human
Thanks.
### Assistant
#> thought: The user thanked me.
#> tool: respond
#> input: I'm always ready to help.
### Human
How much is the square root of 5?
### Assistant
#> thought: I should use the calculator tool
#> tool: calculator
#> input: math.sqrt(5)

### CONVERSATION:

### SYSTEM[CHAT]
Chat Created
### Human[8dc69b28-ad19-4962-b097-18692883f8b8]
hello
### Assistant
#> thought: The user greeted me. I should respond.
#> tool: respond
#> input: Hello!
### Human[8dc69b28-ad19-4962-b097-18692883f8b8]
can you help me with something?
### Assistant
#> thought: The user asked for help. I should use the "respond" tool to send a message.
#> tool: respond
#> input: Of course! What do you need help with?
### Human[8dc69b28-ad19-4962-b097-18692883f8b8]
how much is the natural log of 70?
### Assistant
#> thought: I should use the calculator tool to perform the calculation.
#> tool: calculator
#> input: math.log(70)
### TOOL[calculator]
Result: 4.248495242049359
### Assistant
#>

https://user-images.githubusercontent.com/7552751/234047106-cb773319-c8b0-4911-aa75-7ff445262f9e.mp4

The above is a project I'm working on; I will probably open source it soon, but it is using the prompt format above.
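For anyone wanting to reuse this format, parsing it back out is straightforward; a hypothetical parser for the `#> key: value` lines might look like:

```python
import re

LINE = re.compile(r"^#>\s*(\w+):\s*(.*)$")


def parse_agent_response(text: str) -> dict:
    """Parse '#> key: value' lines; plain lines continue the previous value (e.g. bulleted lists)."""
    fields, last_key = {}, None
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            last_key = m.group(1)
            fields[last_key] = m.group(2)
        elif last_key is not None:
            fields[last_key] += "\n" + line
    return fields


print(parse_agent_response("#> thought: I should use the calculator tool\n#> tool: calculator\n#> input: math.sqrt(5)"))
# -> {'thought': 'I should use the calculator tool', 'tool': 'calculator', 'input': 'math.sqrt(5)'}
```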

seijihariki avatar Apr 24 '23 15:04 seijihariki

That's amazing! Can it also embed a PDF?

rosx27 avatar May 03 '23 11:05 rosx27

Which model (or model+lora) are you using for this?

fblissjr avatar May 03 '23 12:05 fblissjr

Which model (or model+lora) are you using for this?

It's Vicuna 13B, and I have tested a bit of stable-vicuna 13B. For now, other models I tried can't easily follow the current format of my prompts (I have not tried WizardLM, but it seems more of a question-answering model than a chatbot model). Maybe changing the format could help with that, but I believe a LoRA trained for the format would be optimal, and for that we need high-quality tool-usage data.

I have tried to add some more complex tools, but then vicuna-based models start to struggle a bit too (the tools I tried adding are reminder and calendar-access tools). I am currently collecting some data (using the assistant and editing the assistant response when it is not optimal) to attempt to train a LoRA for more complex commands, but I'm uncertain whether this will work. If I am able to collect an okay dataset, I may try to train a LoRA on the LangChain agent format as well, but collecting this data is proving quite a challenge, considering I have not implemented anything in the actual interface to allow assistant responses to be modified. This is what I'm currently trying to implement, so that one can easily correct the assistant's tool queries and thoughts and use the conversation as better training data. For now, I'm just correcting assistant thoughts and tool queries manually in the DB, which is very suboptimal.

My main goal with this project is to provide the assistant with some proactive agency, which is what I find lacking in current chatbots: they are basically either passive (don't do anything until the user sends something) or way too active (run in a loop with a goal). I currently only have it integrated with a single "sensor" (what I call data sources that send data to the assistant to process proactively), which listens for new emails in my Gmail inbox, summarizes an email if it is relevant, and sends a message to the default chat. Some other examples of sensors could be news data or calendar planning events.

I think I may just add some way for people to export their chat conversations as training data, so people can try using it to collect data. But as OpenAssistant seems to be introducing plugins soon, they should be far better placed than me to collect any kind of tool-usage data for future training. Still, I will implement this functionality, mostly because it's a project I personally want to work on.

seijihariki avatar May 04 '23 01:05 seijihariki

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

github-actions[bot] avatar Nov 24 '23 23:11 github-actions[bot]