Add initial implementation for the Counsel of 9
Summary
Counsel of 9 is a new feature that provides diverse perspectives on user questions by simulating a council of 9 AI personas with distinct personalities and values.
How it works:
- User submits a question/prompt
- Nine AI personas (Pragmatist, Visionary, Skeptic, Optimist, Analyst, Creative, Ethicist, Realist, Mediator) each provide their opinion
- Each persona votes for the best opinion (excluding their own) based on their values
- The opinion with the most votes is returned as the 'winning' answer
This lays a foundation for future work such as a multi-LLM planning mode.
NOTE: each question triggers nine opinion calls plus nine voting calls, so this can become very expensive if you are using a model like Opus or GPT-5 Pro.
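For illustration, here is a minimal TypeScript sketch of the opinion-and-voting flow described above. The persona names come from this PR, but the `LLM` type, `counselOf9` function, and prompt wording are hypothetical stand-ins, not the actual goose implementation:

```typescript
// Hypothetical sketch only: "LLM" and "counselOf9" are illustrative names;
// the persona list matches the PR description.
type LLM = (systemPrompt: string, userPrompt: string) => Promise<string>;

const PERSONAS = [
  "Pragmatist", "Visionary", "Skeptic", "Optimist", "Analyst",
  "Creative", "Ethicist", "Realist", "Mediator",
];

async function counselOf9(question: string, llm: LLM): Promise<string> {
  // 1. Each persona answers the question in a fresh conversation.
  const opinions = await Promise.all(
    PERSONAS.map((p) =>
      llm(`You are the ${p}. Answer according to your values.`, question),
    ),
  );

  // 2. Each persona votes for the best opinion, excluding its own.
  const votes = new Array(PERSONAS.length).fill(0);
  await Promise.all(
    PERSONAS.map(async (p, self) => {
      const ballot = opinions
        .map((text, i) => ({ text, i }))
        .filter(({ i }) => i !== self)
        .map(({ text, i }) => `Opinion ${i}: ${text}`)
        .join("\n\n");
      const reply = await llm(
        `You are the ${p}. Reply with only the number of the best opinion.`,
        `${question}\n\n${ballot}`,
      );
      const choice = Number.parseInt(reply.match(/\d+/)?.[0] ?? "", 10);
      if (choice >= 0 && choice < PERSONAS.length && choice !== self) {
        votes[choice] += 1;
      }
    }),
  );

  // 3. The opinion with the most votes wins (ties resolve to the lowest index).
  return opinions[votes.indexOf(Math.max(...votes))];
}
```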
Type of Change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor / Code quality
- [ ] Performance improvement
- [ ] Documentation
- [ ] Tests
- [ ] Security fix
- [ ] Build / Release
- [ ] Other (specify below)
Testing
Manual testing
Related Issues
Relates to #ISSUE_ID
Discussion: LINK (if any)
Screenshots/Demos (for UX changes)
Before:
After:
Alternate counsel members
An interesting approach! But it feels tough to include in the core goose implementation given how many tokens it could use... What would you think about a version that implements the core approach as a set of MCP tools, with an optional server people could turn on?
I did originally really want to make this an MCP with MCP-UI, but I couldn't find a reliable way to make calls to the different LLMs through an MCP… though maybe I'm thinking about this architecture incorrectly, as it's new to me (happy to chat more on Slack to get this to a better state). Because of the voting and personas we have to spawn fresh conversations with the LLM providers, not just take the current chat and provide it as a tool call.
Making it an MCP that could be toggled would be really ideal, because I really wanted the ability to ask in chat what the best way to do something would be, or to ask the council about something. The only reason I deviated from this was the above need to call an LLM: goose core has that functionality built in, whereas MCPs don't.
Because of the token use I did split it out of chat, though that was a trade-off I'd rather not have made.
> The only reason I deviated from this was the above need to call an LLM: goose core has that functionality built in, whereas MCPs don't.
@simonsickle I implemented support for MCP sampling last week, where the MCP server can use goose's model connection. It should just work if you emit `sampling/createMessage` requests from the server.
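For anyone picking this up, a minimal sketch of what a sampling-based Counsel tool could look like with the TypeScript MCP SDK (`@modelcontextprotocol/sdk`). The server name, tool name, and prompt wording are illustrative assumptions; `sampling/createMessage` is the actual protocol request, surfaced as `createMessage` in the SDK:

```typescript
// Sketch of one Counsel tool as an MCP server that delegates model calls to
// the host via sampling/createMessage. Assumes @modelcontextprotocol/sdk;
// the server, tool, and prompt details here are illustrative.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const mcp = new McpServer({ name: "counsel-of-9", version: "0.1.0" });

mcp.tool(
  "ask_persona",
  { persona: z.string(), question: z.string() },
  async ({ persona, question }) => {
    // Emits sampling/createMessage; the host (goose) runs the completion on
    // its own model connection, so the server needs no API keys of its own.
    const result = await mcp.server.createMessage({
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: `You are the ${persona}. Answer: ${question}`,
          },
        },
      ],
      maxTokens: 1024,
    });
    const text = result.content.type === "text" ? result.content.text : "";
    return { content: [{ type: "text", text }] };
  },
);

// Requires a host that supports the sampling capability.
await mcp.connect(new StdioServerTransport());
```

Fresh, independent conversations per persona fall out naturally here, since each `createMessage` request carries only the messages the server supplies.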
I think it would be great if we could indeed turn this into an MCP server and use the new sampling! Going to close this for now.