Changes to Group Chat for better alt-model (aka Local LLM) support
Why are these changes needed?
Alternative models (aka Local LLMs or alt-models) do not perform as well as ChatGPT in the Group Chat scenario. Alt-models have varying degrees of capability, and tailoring is required to utilise them effectively in AutoGen.
Note: Some detail is provided below to give background context to the suggested changes.
I'll start with some background and my experiences testing alt-models, then follow with my recommended changes for discussion.
Experience and testing
After a few rounds of testing a number of alt-models, specifically with the Group Chat functionality, I found that running them with the default AutoGen codebase wasn't providing reliable results.
Models tested:
- Llama2 13B Chat
- Mistral 7B v0.2 Instruct
- Mixtral 8x7B v0.1 Instruct (Q4)
- Neural Chat 7B
- OpenHermes 7B Mistral v2.5 (Q6)
- Orca 2 13B
- Phi-2
- Phind CodeLlama 34B v2
- Qwen 14B Chat (Q6)
- SOLAR 10.7B Instruct
- Yi-34B Chat (Q3)
Group Chat testing scenario: I based my testing around Tevslin's debating Group Chat (from Discord - https://github.com/tevslin/debate_team). That worked fine with ChatGPT but struggled on a number of fronts with all alt-models.
My testing repository, playground, and model comparisons cover multiple models and scenarios.
High-level findings below
LLMs behave differently to the same prompts and context
The same prompt behaves differently across models. This means we need to be able to tailor all prompts (including the hard-coded prompts such as select_speaker) to the models we want to use.
Limited and varied context windows mean we need to be more careful about how much context we send. Small models (e.g. Phi-2) struggled considerably once a few speakers had spoken. Even larger models (e.g. Mixtral) suffered from hallucinations and poor instruction following after 3-4K tokens.
Returning a single word response
For both selecting the next speaker and termination, getting these models to return a single-word response is very difficult. They often needed a prompt using some form of chain-of-thought to get the right answer. Even the simple instruction "Say the word 'TERMINATE'" would, more often than not, result in the model explaining why it doesn't think terminating is a good idea, or not saying it at all.
Example asking Phi-2 to say terminate:
```
>>> Say the word "TERMINATE"
I'm sorry, as an AI language model, I cannot use inappropriate words in my responses. Can you
please provide me with another input?
```
For agent selection, asking the model to return just the name would usually result in the incorrect agent name being returned, whereas asking "What is the next role in the sequence to speak, answer concisely please?" would return the correct agent. However, the model then explains itself, so we can't simply match a single-word response against the agent names for speaker selection.
Underscores and Mixtral formatting
For agent names that contain underscores, models would sometimes reference the agent's name with spaces instead of underscores (e.g. "Debate_Judge" would be referenced as "Debate Judge").
Also, oddly, Mixtral would escape the underscores (e.g. "Debate_Judge" would be referenced as "Debate\_Judge") more often than not.
This made matching agent names inconsistent.
Temperature
Keeping the temperature at 0 (or as close as possible) provided better results. This is already catered for in the model config.
Proposed changes
Suggestion 1 - Expand agent name with underscore matching
The code that finds agent names in an LLM's response during speaker selection currently requires an exact match. I have found that LLMs sometimes use spaces instead of underscores, or `\_` instead of `_`. I propose changing the matching method to also accept underscores replaced by spaces or `\_`.
File: groupchat.py
Function: def _mentioned_agents(self, message_content: Union[str, List], agents: Optional[List[Agent]]) -> Dict:
Change from:
```python
regex = (
    r"(?<=\W)" + re.escape(agent.name) + r"(?=\W)"
)  # Finds agent mentions, taking word boundaries into account
```
To:
```python
regex = (
    r"(?<=\W)(" + re.escape(agent.name) + r"|"
    + re.escape(agent.name.replace('_', ' ')) + r"|"
    + re.escape(agent.name.replace('_', r'\_')) + r")(?=\W)"
)
```
Suggestion 2 - Customised select speaker prompts
One of the biggest improvements in selecting speakers in a Group Chat with local LLMs came from customising the select speaker prompts used to choose the next agent. This will need to be customisable per Group Chat workflow; the example below is purely illustrative of a change that significantly improved the chance of getting the correct next agent. I foresee the ability to add a "customised select speaker prompt" to a workflow; it could be an f-string and incorporate the agent list.
Note: Both select_speaker_msg and select_speaker_prompt in groupchat.py require changes.
File: groupchat.py
Function: def select_speaker_msg(self, agents: Optional[List[Agent]] = None) -> str:
EXAMPLE change from:
```python
return f"""You are in a role play game. The following roles are available:
{self._participant_roles(agents)}.
Read the following conversation.
Then select the next role from {[agent.name for agent in agents]} to play. Only return the role."""
```
To:
```python
return f"You are in a role play game. The following roles are available:\n{self._participant_roles(agents)}.\n\nRoles must be selected in this order: {', '.join([f'{index + 1}. {agent.name}' for index, agent in enumerate(agents)])}. What is the next role in the sequence to speak, answer concisely please? If everyone has spoken the next speaker is the {agents[0].name}."
```
Function: def select_speaker_prompt(self, agents: Optional[List[Agent]] = None) -> str:
EXAMPLE change from:
```python
return f"Read the above conversation. Then select the next role from {[agent.name for agent in agents]} to play. Only return the role."
```
To:
```python
return f"Read the above conversation and select the next role, the list of roles in order is 1. Affirmative_Constructive_Debater, 2. Negative_Constructive_Debater, 3. Affirmative_Rebuttal_Debater, 4. Negative_Rebuttal_Debater, 5. Debate_Judge, 6. Debate_Moderator_Agent. What is the next role in the sequence to speak, answer concisely please?"
```
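The role names in the example above are hard-coded purely for illustration. A sketch of building the same prompt dynamically from the agent list (the `ordered_roles_prompt` helper is hypothetical, not part of AutoGen):

```python
from typing import List

def ordered_roles_prompt(agent_names: List[str]) -> str:
    """Build the select-speaker prompt from the agent list rather than
    hard-coding role names, so it works for any Group Chat workflow."""
    ordered = ", ".join(f"{i + 1}. {name}" for i, name in enumerate(agent_names))
    return (
        "Read the above conversation and select the next role, "
        f"the list of roles in order is {ordered}. "
        "What is the next role in the sequence to speak, answer concisely please?"
    )

names = ["Affirmative_Constructive_Debater", "Negative_Constructive_Debater", "Debate_Judge"]
prompt = ordered_roles_prompt(names)
```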
Suggestion 3 - Option to simplify speaker content history for speaker selection
Due to the limited context windows of various alt-models, as well as their ability to process that context, providing the full chat history during speaker selection can yield unexpected results, and I found that as the conversation got longer the chance of getting the correct next agent dropped fast. I propose an option to replace each speaker's message in the history with a simple, customisable message containing the agent name and the fact that they have spoken. In my testing, this helped the alt-model consistently understand the sequence of agents so far and determine who would be next, even as the conversation got longer. This may work best when the actual content of the conversation is not important for speaker selection and it's based more on who has spoken before/last.
File: groupchat.py
Function: def select_speaker(self, last_speaker: Agent, selector: ConversableAgent) -> Agent:
Change from:
```python
# auto speaker selection
selector.update_system_message(self.select_speaker_msg(agents))
final, name = selector.generate_oai_reply(messages)
```
To (the message should ideally be a passed-in parameter rather than hard-coded like this):
```python
shorter_messages = messages.copy()
for message in shorter_messages:
    if 'name' in message and not message['name'] == 'userproxy':
        message['content'] = f"I am {message['name']} and I have spoken."
final, name = selector.generate_oai_reply(shorter_messages)
```
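As a standalone sketch of the same idea (the `simplify_history` helper, the 'userproxy' exclusion, and the marker text are all illustrative assumptions), note that each message dict is copied, since `messages.copy()` alone is a shallow copy and editing `message['content']` in place would also change the original chat history:

```python
from typing import Dict, List

def simplify_history(messages: List[Dict], exclude: str = "userproxy") -> List[Dict]:
    """Replace each agent message with a short marker so the selector LLM
    only sees the speaking order, keeping the selection context small."""
    simplified = []
    for message in messages:
        message = dict(message)  # copy each dict so the real history isn't mutated
        if "name" in message and message["name"] != exclude:
            message["content"] = f"I am {message['name']} and I have spoken."
        simplified.append(message)
    return simplified

history = [
    {"name": "userproxy", "content": "Begin the debate."},
    {"name": "Affirmative_Constructive_Debater", "content": "My opening argument is..."},
]
short = simplify_history(history)
assert short[1]["content"] == "I am Affirmative_Constructive_Debater and I have spoken."
assert history[1]["content"] == "My opening argument is..."  # original untouched
```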
Suggestion 4 - Requery when select speaker response contains more than one name
It is difficult to get some alt-models to return a single-word response (such as an agent name); I found you need to ask them a broader question so they give it more consideration, as simply asking for one word resulted in the wrong selection the majority of the time. I propose that when the LLM's response to the select speaker prompt contains more than one detected agent name, we requery the LLM with that response and ask it to pick the single name. This worked well during my testing.
Here's an example of a response to the select speaker prompt that would be picked up and run through the LLM to pick the next agent:
The next role in the sequence to speak would be 'Negative_Constructive_Debater'. This role is responsible for providing constructive feedback and counterpoints to enhance discussions. They should analyze the validity of arguments presented by the Affirmative Constructive Debater and offer alternative perspectives or corrections where necessary.
File: groupchat.py
Function: def _finalize_speaker(self, last_speaker: Agent, final: bool, name: str, agents: Optional[List[Agent]]) -> Agent:
Change from:
```python
def _finalize_speaker(self, last_speaker: Agent, final: bool, name: str, agents: Optional[List[Agent]]) -> Agent:
```
To:
```python
def _finalize_speaker(self, last_speaker: Agent, final: bool, name: str, selector: ConversableAgent, agents: Optional[List[Agent]]) -> Agent:
```
Change from:
```python
if len(mentions) == 1:
    name = next(iter(mentions))
```
To (perhaps with the ability to customise the requery prompt):
```python
if len(mentions) == 1:
    name = next(iter(mentions))
# When we have more than one name, requery the LLM with the response to pick the single agent
elif len(mentions) > 1:
    select_name_query = [
        {
            'content': f"Respond with just the name of the next speaker identified by this text: {name}",
            'role': 'system',
        }
    ]
    # Requery with just the latest reply and try to narrow it down to a single agent name
    final_single, name_single = selector.generate_oai_reply(select_name_query)
    # Try again to see if we now have just one name matching
    mentions = self._mentioned_agents(name_single, agents)
    if len(mentions) == 1:
        name = next(iter(mentions))
        logger.warning(
            f"[Note: Successfully requeried initial speaker selection response to narrow to one agent: {name}]"
        )
    else:
        logger.warning(
            "GroupChat select_speaker failed to resolve the next speaker's name. "
            "Multiple names were in the original reply and the requery was unable to identify a single agent. "
            f"Original agent selection reply: {name}\nRequeried agent selection reply: {name_single}"
        )
```
In functions select_speaker and a_select_speaker, change from:
```python
return self._finalize_speaker(last_speaker, final, name, agents)
```
To:
```python
return self._finalize_speaker(last_speaker, final, name, selector, agents)
```
Suggestion 5 - Flexible terminate string handling
As per the above notes, getting a single-word response from these LLMs is tricky; it's more typical for them to provide a sentence or paragraph with TERMINATE somewhere in it (if we can get one at all!). To help, I propose allowing TERMINATE (case-sensitive) to appear anywhere in the response. Furthermore, it may be worth making the terminate keyword configurable, as I have found the word has a negative connotation and some LLMs don't like to output it at all.
File: conversable_agent.py
Function: __init__
Change from:
```python
else (lambda x: content_str(x.get("content")) == "TERMINATE")
```
To:
```python
else (lambda x: "TERMINATE" in content_str(x.get("content")))
```
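To illustrate the difference between the two checks in a self-contained way (using plain `str()` in place of AutoGen's `content_str` helper, purely so the sketch runs on its own):

```python
# Strict check: the entire reply must equal "TERMINATE"
strict_is_termination = lambda x: str(x.get("content")) == "TERMINATE"

# Flexible check: "TERMINATE" may appear anywhere in the reply (case-sensitive)
flexible_is_termination = lambda x: "TERMINATE" in str(x.get("content"))

# A typical verbose alt-model reply: the keyword is buried in a sentence
reply = {"content": "I believe the debate has concluded, so: TERMINATE. Goodbye!"}
assert not strict_is_termination(reply)
assert flexible_is_termination(reply)
```

The strict check fails on exactly the kind of verbose reply shown, while the flexible check catches it without affecting replies that already consist of the bare keyword.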
Final notes
- The above suggestions provide flexibility and are designed to accommodate the shortcomings of smaller/local/public models. The ability to turn these on/off would help avoid changing any existing work by others.
- I considered proposing a new group chat class specifically for alt-models, but I'm hoping this isn't necessary, as two similar classes would make the codebase harder to maintain.
- My testing has been focused on a debating scenario utilising group chat; it hasn't been used with Skills or function calling, so I don't know if the proposed changes work in those scenarios.
- Thank you!
@microsoft-github-policy-service agree
Thanks for the PR! The termination string makes sense to me for many non-openai LLMs. Can others comment on the changes made to speaker selection?
Thanks for the PR. I'm not sure about the hard-coded examples in autogen/agentchat/groupchat.py. @ekzhu, I suspect it might be better to instead: (1) try to run the existing autogen tests with alt-models and see where the gaps are — are they as reported in this PR? (2) change the hard-coded examples in this PR to dynamic code; (3) ensure the tests pass with both alt-models and OpenAI, to prevent breaking OpenAI whilst 'fixing' alt-models.
Thanks for your suggestions @joshkyh, I'll jump in and say I'm happy to try running the tests with the alt-models to identify gaps, if someone can point me in the right direction on how to run them.
If the tests cover end-to-end scenario workflows (e.g. like the debating workflow I used for testing) that would highlight the gaps.
No worries, the tests can be found here https://github.com/microsoft/autogen/tree/main/test
Though I think the most relevant ones should be in https://github.com/microsoft/autogen/blob/main/test/agentchat/test_groupchat.py
https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml and https://github.com/microsoft/autogen/blob/main/.github/workflows/contrib-openai.yml specify the tests that require an OpenAI endpoint, which in your case would be swapped to a local LLM endpoint.
I'm not sure whether the maintainers would be keen to include local LLMs in the test workflows, would need to ask @ekzhu - I don't know who is the lead for alt-models related issues.
Thanks, I'll have a look through and try and test against a set of local LLMs.
@joshkyh @marklysze you can ping @olgavrou @BeibinLi and me on non-open ai related works.
Codecov Report
Attention: Patch coverage is 57.69% with 11 lines in your changes missing coverage. Please review.
Project coverage is 47.32%. Comparing base (f63f056) to head (2f58b0f).

| Files | Patch % | Lines |
|---|---|---|
| autogen/agentchat/groupchat.py | 57.69% | 11 Missing :warning: |

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1746      +/-  ##
=========================================
+ Coverage   37.53%   47.32%    +9.78%
=========================================
  Files          77       77
  Lines        7710     7728       +18
  Branches     1655     1793      +138
=========================================
+ Hits         2894     3657      +763
+ Misses       4573     3746      -827
- Partials      243      325       +82
```

| Flag | Coverage Δ |
|---|---|
| unittest | 14.18% <7.69%> (?) |
| unittests | 46.24% <57.69%> (+8.72%) :arrow_up: |

Flags with carried forward coverage won't be shown.
@ekzhu and team, I'll close this, as the following changes address my proposed changes:
- Suggestion 1 - Expand agent name with underscore matching - addressed
- Suggestion 2 - Customised select speaker prompts - addressed
- Suggestion 3 - Option to simplify speaker content history for speaker selection - addressed through TransformMessages
- Suggestion 4 - Requery when select speaker response contains more than one name - addressed through nested select speaker chat
- Suggestion 5 - Flexible terminate string handling - already existed :)
I think things have moved along substantially for smaller LLMs and group chat in the last 3 months... still more to do, but great to have these covered.
Thank you @marklysze for your fantastic contributions!