Open-Assistant
Allow multiple plugins at the same time
Renamed identifiers in many places: `used_plugin` → `used_plugins`, and `PluginUsed` → `PluginsUsed`.
- [ ] Make frontend work with multiple
- [ ] Make DB update correctly
:x: pre-commit failed.
Please run `pre-commit run --all-files` locally and commit the changes.
Find more information in the repository's CONTRIBUTING.md
Some discussion should probably be had on whether or not plugins should be able to talk to each other, as I could see this being used in nefarious ways, e.g. a plugin that asks you to give it private information (from the database or another private API).
The frontend may be working, but I can't check without fixing the database, which I don't know how to do.
Re fixing the database: you should start fresh (delete your OA Docker volumes/images), run the backend, and then generate an Alembic script. When you restart, it should work fine.
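The generated migration for such a rename might look roughly like this (a hedged sketch only; the table and column names here are assumptions, and `alembic revision --autogenerate` against the running backend should produce the real script):

```python
# Hypothetical Alembic migration sketch: replace the singular column with a
# plural JSON column. Table/column names are assumptions for illustration.
import sqlalchemy as sa
from alembic import op

revision = "xxxx"       # filled in by Alembic
down_revision = "yyyy"  # previous revision id


def upgrade() -> None:
    op.add_column("message", sa.Column("used_plugins", sa.JSON(), nullable=True))
    op.drop_column("message", "used_plugin")


def downgrade() -> None:
    op.add_column("message", sa.Column("used_plugin", sa.JSON(), nullable=True))
    op.drop_column("message", "used_plugins")
```

Starting from fresh volumes avoids having to write data-preserving migration logic between the two column shapes.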
Some of the changes were made to get mypy to show no errors in the project. I am used to typed languages and wanted to avoid errors relating to types/None.
I reset the database, but it is still saying that it cannot find the `usedPlugins` column.
:x: pre-commit failed.
Please run `pre-commit run --all-files` locally and commit the changes.
Find more information in the repository's CONTRIBUTING.md
Which of these does this PR try to implement?
- A feature where a user's query can trigger the use of multiple plugins (all enabled plugins are presented to the LLM) for a single answer, or
- Just giving the LLM multiple plugins (all enabled plugins are presented to it) to choose from, with only the one it chose being used to generate the response (final prompt)?

I am asking because there are at least two obvious options, mentioned above, and I think that presenting multiple plugins to the LLM and letting it choose one is more feasible (usable/reliable) than letting it choose and use multiple plugins; in that case the frontend could also remain almost unchanged.
And we should also keep in mind that both options would need a much larger context window than what we currently have.
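The second option (let the model pick one tool) could be as small as listing every enabled plugin in the prompt; a minimal sketch, where the function and field names are hypothetical rather than the actual inference-worker code:

```python
# Hypothetical sketch: present all enabled plugins to the LLM and ask it to
# pick exactly one. Names are illustrative, not Open-Assistant's real API.

def build_plugin_selection_prompt(plugins: list[dict], query: str) -> str:
    lines = ["You may use exactly ONE of the following tools:"]
    for p in plugins:
        lines.append(f"- {p['name']}: {p['description']}")
    lines.append(f"User query: {query}")
    lines.append("Respond with the name of the single tool you will use.")
    return "\n".join(lines)


prompt = build_plugin_selection_prompt(
    [
        {"name": "calculator", "description": "evaluate math expressions"},
        {"name": "web_search", "description": "search the web"},
    ],
    "What is 2**10?",
)
```

Every enabled plugin's name and description lands in the prompt, which is exactly why either option needs a larger context window as the plugin list grows.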
@draganjovanovich This does the first, and it also shows all the responses it received at the end, allowing it to properly use the output of multiple plugins in the final output.
Ok, you are aware that the final response is generated outside of the plugin system, so there must be one final prompt? At least in the current state of the inference worker.
@draganjovanovich What I mean is that instead of returning just the last observation as input to the final prompt, it returns all the observations appended to each other.
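Appending the observations could be as simple as this (a sketch under assumptions; in the real worker the observations come from the tool-execution loop, and the labeling format is illustrative):

```python
# Hypothetical sketch: instead of keeping only the last tool observation,
# join every observation so the final prompt sees all plugin outputs.

def combine_observations(observations: list[str]) -> str:
    # Label each observation so the model can tell the sources apart.
    return "\n\n".join(
        f"Observation {i + 1}: {obs}" for i, obs in enumerate(observations)
    )


combined = combine_observations(
    ["calculator result: 1024", "web_search result: 2^10 = 1024"]
)
```

Labeling each observation (rather than blindly concatenating) gives the model a chance to attribute each piece of output to the plugin that produced it when composing the final answer.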