Mark Sze
@wenngong, thanks for your review, I've updated accordingly. @Hk669, I'll review tests again when you have had a chance to note which ones can be removed.
Thanks for approving @wenngong, @Hk669 - are you happy to keep tests as is or would you like to have some removed? If you are happy to keep as is...
> looks good to me👍, thanks for the efforts @marklysze

Thanks @Hk669! I'll approve on your behalf :)
Hey @yockgen, thanks for raising this. In the code example, you should not need to put the functions in the `llm_config` dictionary yourself. You noted that the `initiate_chat` worked fine...
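To illustrate the idea (this is a minimal stand-in sketch, not the real AG2 API — the class, method, and tool names here are hypothetical): tools get registered on the agent, which assembles the `tools` entry for the LLM request itself, so the user never edits `llm_config` by hand.

```
# Hypothetical minimal sketch of tool registration, for illustration only.
class Agent:
    def __init__(self, llm_config):
        self.llm_config = dict(llm_config)  # base config only; no tools here
        self._tools = []

    def register_for_llm(self, description):
        # Decorator that records the function's schema on the agent.
        def deco(fn):
            self._tools.append({
                "type": "function",
                "function": {"name": fn.__name__, "description": description},
            })
            return fn
        return deco

    def request_payload(self):
        # Tools are merged into the request at call time,
        # not stored by the user in llm_config.
        return {**self.llm_config, "tools": self._tools}


agent = Agent({"model": "gpt-4o-mini"})

@agent.register_for_llm(description="Get current weather for a city")
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

payload = agent.request_payload()
print(payload["tools"][0]["function"]["name"])  # get_weather
print("tools" in agent.llm_config)  # False
```

The real framework handles this registration and merging internally, which is why manually adding functions to `llm_config` is unnecessary.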
Thanks for creating this @tejas-dharani. I think we should avoid changing the speaker selection template prompt as I believe it can be changed through the API and changing the default...
@randombet do you have any further comments on this?
The current implementation, where an agent recommends a tool and the next agent receives and processes that request, has been in the code base for a very long time, so...
> > If we change `generate_reply` then we have to be careful that the tool call message may not be applied to the messages list _before_ we execute the tool,...
Sample code (use different LLMs for different agents as well):

```
from autogen import ConversableAgent
from autogen.agents.experimental import EvaluationAgent

llm_config = {"api_type": "openai", "model": "gpt-4o-mini"}

# Our agents that will...
```
Here's an example of the compiled responses that will then go to the internal evaluator agent for selection:

```
evaluation_user (to evaluationagent_evaluator):

AGENT 'gpt_4o_mini_agent' RESPONSE:
The sky appears blue due...
```