[Bug]: `AgentOptimizer` error when using `LLMConfig`
### Describe the bug
When an `LLMConfig` object is passed as the `llm_config` argument of the `AgentOptimizer` constructor, it is not handled correctly, because the constructor treats `llm_config` as a plain dict:
```python
if self.llm_config in [{}, {"config_list": []}, {"config_list": [{"model": ""}]}]:
    raise ValueError(
        "When using OpenAI or Azure OpenAI endpoints, specify a non-empty 'model' either in 'llm_config' or in each config of 'config_list'."
    )
self.llm_config["config_list"] = filter_config(llm_config["config_list"], {"model": [self.optimizer_model]})
self._client = OpenAIWrapper(**self.llm_config)
```
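Both the membership test and the `llm_config["config_list"]` access assume a plain dict, so an `LLMConfig` instance falls through the checks unread. A minimal defensive fix could normalize the argument first; the sketch below assumes `LLMConfig` exposes a pydantic-style `model_dump()`, which is an assumption about the API, not confirmed against the source:

```python
from typing import Any, Dict

def _to_llm_config_dict(llm_config: Any) -> Dict[str, Any]:
    """Coerce llm_config to a plain dict so the existing dict-based
    checks in AgentOptimizer.__init__ keep working.

    model_dump() is an assumed pydantic-style export, not a confirmed
    part of the LLMConfig API.
    """
    if isinstance(llm_config, dict):
        return llm_config
    return llm_config.model_dump()
```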
This issue is related to #1774, and to address both I think that `ConversableAgent`'s `_validate_llm_config` method should become a classmethod of `LLMConfig`, so the validation generalizes to use cases outside agents, as sketched below.
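To make the proposal concrete, here is a rough sketch of what a centralized validator on `LLMConfig` might look like. The class is an illustrative stand-in, and the `model_dump()` conversion is an assumption; none of this is the actual AG2 implementation:

```python
from typing import Any, Dict


class LLMConfig:  # illustrative stand-in for autogen's LLMConfig
    @classmethod
    def _validate_llm_config(cls, llm_config: Any) -> Dict[str, Any]:
        """Centralized validation usable by agents and AgentOptimizer alike.

        Accepts either an LLMConfig instance or a dict; model_dump() is
        an assumed pydantic-style export, not a confirmed API.
        """
        as_dict = llm_config.model_dump() if isinstance(llm_config, cls) else llm_config
        if as_dict in [{}, {"config_list": []}, {"config_list": [{"model": ""}]}]:
            raise ValueError(
                "When using OpenAI or Azure OpenAI endpoints, specify a non-empty "
                "'model' either in 'llm_config' or in each config of 'config_list'."
            )
        return as_dict
```

With something like this in place, both `ConversableAgent` and `AgentOptimizer` could call the same validator regardless of whether they receive a dict or an `LLMConfig` instance.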
### Steps to reproduce
```python
from autogen import LLMConfig
from autogen.agentchat.contrib.agent_optimizer import AgentOptimizer

llm_config = LLMConfig.from_json(path="OAI_CONFIG_LIST")
optimizer = AgentOptimizer(llm_config=llm_config, max_actions_per_step=3, optimizer_model="gpt-4o")
```
Raises:
```
OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```
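Until the constructor accepts `LLMConfig` directly, one possible workaround is to pass a plain dict. This sketch assumes `LLMConfig` offers a pydantic-style `model_dump()`; if it does not, constructing the `{"config_list": [...]}` dict by hand works the same way:

```python
from autogen import LLMConfig
from autogen.agentchat.contrib.agent_optimizer import AgentOptimizer

llm_config = LLMConfig.from_json(path="OAI_CONFIG_LIST")

# Pass a plain dict instead of the LLMConfig object itself.
# model_dump() is an assumed conversion, not a confirmed API.
optimizer = AgentOptimizer(
    llm_config=llm_config.model_dump(),
    max_actions_per_step=3,
    optimizer_model="gpt-4o",
)
```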
---

Hi @giorgossideris: Thanks for flagging this issue and sharing your suggestion.

@marklysze and @davorrunje, would appreciate your thoughts on the proposed approach. @giorgossideris, if the feedback looks good, would you be open to submitting a PR with the fix?

cc @kumaranvpl

---

@skzhang1 Please take a look!

---

Related to #1946