[Issue]: IndexError: list index out of range
Describe the issue
My Python version: 3.11
I ran the code from the notebook agentchat_auto_feedback_from_code_execution.ipynb:
# create an AssistantAgent named "assistant"
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "cache_seed": 41,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0,  # temperature for sampling
    },  # configuration for autogen's enhanced inference API, which is compatible with the OpenAI API
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
)

# the assistant receives a message from the user_proxy, which contains the task description
chat_res = user_proxy.initiate_chat(
    assistant,
    message="""What date is today? Compare the year-to-date gain for META and TESLA.""",
    summary_method="reflection_with_llm",
)
Then I got this error message:
Traceback (most recent call last):
File "/home/yongxiangchen69/develop/myproject/my_autogen/first_agent.py", line 37, in <module>
chat_res = user_proxy.initiate_chat(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 973, in initiate_chat
self.send(msg2send, recipient, silent=silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
recipient.receive(message, self, request_reply, silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
self.send(reply, sender, silent=silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
recipient.receive(message, self, request_reply, silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
self.send(reply, sender, silent=silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
recipient.receive(message, self, request_reply, silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
self.send(reply, sender, silent=silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
recipient.receive(message, self, request_reply, silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 781, in receive
self.send(reply, sender, silent=silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 620, in send
recipient.receive(message, self, request_reply, silent)
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 779, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1862, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1261, in generate_oai_reply
extracted_response = self._generate_oai_reply_from_client(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yongxiangchen69/.local/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1285, in _generate_oai_reply_from_client
extracted_response = llm_client.extract_text_or_completion_object(response)[0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
I have confirmed that the request is not being sent to the LLM, because I use a local model and no request logs were found. How can I solve this?
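The final traceback frame indexes `[0]` into whatever the client extracted from the model response; if no completion ever came back, that list is empty and `[0]` raises exactly this IndexError. A minimal standalone sketch of the pattern (the function names below are illustrative, not autogen's internals):

```python
def extract_first_response(responses):
    # Mimics llm_client.extract_text_or_completion_object(response)[0]:
    # indexing [0] on an empty list raises IndexError.
    return responses[0]


def extract_first_response_safe(responses, default=None):
    # A defensive variant: return a default instead of raising.
    return responses[0] if responses else default


print(extract_first_response_safe([], default="<no completion returned>"))
```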
Steps to reproduce
- pip install pyautogen
- run code on the notebook
agentchat_auto_feedback_from_code_execution.ipynb
The local model you are using may not support empty messages in the list of messages. The UserProxyAgent sends a default empty message when no code is detected. In this case it didn't detect the single line code block. Try to set the default_reply of UserProxyAgent to a different msg, for example, "no code is found". If it solves your problem, I'd appreciate the answer to be added to FAQ or tutorials. cc @ekzhu @jackgerrits
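A sketch of the idea behind this suggestion, assuming the local model server rejects messages with empty `"content"`: replace an empty auto-reply with a non-empty placeholder before it is sent, which is effectively what `default_auto_reply` does (the helper name here is hypothetical, not autogen API):

```python
def normalize_auto_reply(reply: str, fallback: str = "no code is found") -> str:
    # If the auto-generated reply is empty or whitespace-only, substitute a
    # non-empty fallback so the model server never sees an empty message.
    return reply if reply and reply.strip() else fallback


print(normalize_auto_reply(""))       # falls back
print(normalize_auto_reply("done"))   # passes through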
Thanks @sonichi for your reply! Following your answer, I set default_auto_reply on the UserProxyAgent:
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    default_auto_reply="no code is found",
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
)
It no longer raises the IndexError.
But the process seems to enter a loop: the answers and questions repeat again and again.
Is this caused by my default_auto_reply setting, or by my local model?
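For context, the chat only stops early when the termination predicate matches; otherwise the agents keep exchanging the default reply until max_consecutive_auto_reply (10 above) is exhausted, which looks like a loop. The predicate from the snippet above can be checked standalone:

```python
# The same is_termination_msg predicate as in the UserProxyAgent config:
# the chat ends early only if the reply literally ends with "TERMINATE".
is_termination_msg = lambda x: x.get("content", "").rstrip().endswith("TERMINATE")

print(is_termination_msg({"content": "All done. TERMINATE"}))  # True
print(is_termination_msg({"content": "no code is found"}))     # False
```

If the local model never emits "TERMINATE", the conversation cannot end early, so the repetition may be the model's behavior rather than the default_auto_reply setting itself.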
Why would the IndexError be caused by my local model not supporting empty messages? I don't think that is the root cause, because my local model never receives any requests. Is there something wrong with my understanding?
I just tried to run the code under 'no code execution' in the quick start found here, and I got the same error.
I also encountered the error @MaveriQ mentioned when running the sample. I tried versions 0.2.36 and 0.2.37.
(Came back to say that it did work after switching to 0.2.35.)
@MaveriQ @dokwasny
Guys, when you say "I got the same error", can you also post your model (local or remote, which API, which version) and a code snippet to reproduce the error?
Otherwise we don't know what to do with it.
We don't have access to every possible model API and local model.
Thanks,
Same error here.
autogen version: 0.2.37
python version: 3.12.7
I tried to run the following code snippet from the getting started documentation:
import os
from autogen import AssistantAgent, UserProxyAgent
llm_config = {"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)
# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about NVDA and TESLA stock prices.",
)
Getting error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[36], line 15
12 user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)
14 # Start the chat
---> 15 user_proxy.initiate_chat(
16 assistant,
17 message="Tell me a joke about NVDA and TESLA stock prices.",
18 )
File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:1114, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs)
1112 else:
1113 msg2send = self.generate_init_message(message, **kwargs)
-> 1114 self.send(msg2send, recipient, silent=silent)
1115 summary = self._summarize_chat(
1116 summary_method,
1117 summary_args,
1118 recipient,
1119 cache=cache,
1120 )
1121 for agent in [self, recipient]:
File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:748, in ConversableAgent.send(self, message, recipient, request_reply, silent)
746 valid = self._append_oai_message(message, "assistant", recipient, is_sending=True)
747 if valid:
--> 748 recipient.receive(message, self, request_reply, silent)
749 else:
750 raise ValueError(
751 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
752 )
File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:914, in ConversableAgent.receive(self, message, sender, request_reply, silent)
912 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
913 return
--> 914 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
915 if reply is not None:
916 self.send(reply, sender, silent=silent)
File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:2068, in ConversableAgent.generate_reply(self, messages, sender, **kwargs)
2066 continue
2067 if self._match_trigger(reply_func_tuple["trigger"], sender):
-> 2068 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
2069 if logging_enabled():
2070 log_event(
2071 self,
2072 "reply_func_executed",
(...)
2076 reply=reply,
2077 )
File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:1436, in ConversableAgent.generate_oai_reply(self, messages, sender, config)
1434 if messages is None:
1435 messages = self._oai_messages[sender]
-> 1436 extracted_response = self._generate_oai_reply_from_client(
1437 client, self._oai_system_message + messages, self.client_cache
1438 )
1439 return (False, None) if extracted_response is None else (True, extracted_response)
File ~/dev/.venv/lib/python3.12/site-packages/autogen/agentchat/conversable_agent.py:1455, in ConversableAgent._generate_oai_reply_from_client(self, llm_client, messages, cache)
1452 all_messages.append(message)
1454 # TODO: #1143 handle token limit exceeded error
-> 1455 response = llm_client.create(
1456 context=messages[-1].pop("context", None), messages=all_messages, cache=cache, agent=self
1457 )
1458 extracted_response = llm_client.extract_text_or_completion_object(response)[0]
1460 if extracted_response is None:
File ~/dev/.venv/lib/python3.12/site-packages/autogen/oai/client.py:775, in OpenAIWrapper.create(self, **config)
773 continue # filter is not passed; try the next config
774 try:
--> 775 self._throttle_api_calls(i)
776 request_ts = get_current_ts()
777 response = client.create(params)
File ~/dev/.venv/lib/python3.12/site-packages/autogen/oai/client.py:1072, in OpenAIWrapper._throttle_api_calls(self, idx)
1070 def _throttle_api_calls(self, idx: int) -> None:
1071 """Rate limit api calls."""
-> 1072 if self._rate_limiters[idx]:
1073 limiter = self._rate_limiters[idx]
1075 assert limiter is not None
IndexError: list index out of range
@moryachok
I hit this as well.
- pip install autogen-agentchat
- run the very first getting started example shown here: https://microsoft.github.io/autogen/0.2/docs/Getting-Started
- error
IndexError: list index out of range
After navigating the code, I found the problem ultimately comes from this line, where the rate limiters are only initialised if the config_list parameter is present:
https://github.com/microsoft/autogen/blob/610388945b6da3a1cc0b70300a2225ad53d6b8f2/autogen/oai/client.py#L442
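A minimal standalone reproduction of that bug pattern (class and attribute names are illustrative, not autogen's internals): a list that is only populated when config_list is present, then indexed unconditionally later.

```python
class Wrapper:
    def __init__(self, config_list=None):
        # BUG pattern: rate limiters are only built from config_list, so the
        # list stays empty when llm_config has no "config_list" key ...
        self._rate_limiters = [] if config_list is None else [None for _ in config_list]

    def _throttle_api_calls(self, idx):
        # ... but this indexes the list unconditionally.
        if self._rate_limiters[idx]:  # IndexError when the list is empty
            pass


w = Wrapper()  # llm_config without "config_list"
try:
    w._throttle_api_calls(0)
except IndexError as e:
    print("IndexError:", e)
```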
FIX
So the workaround for this bug is simply to supply the llm_config with a config_list key:
llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        },
    ],
}
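If you have several places that pass a flat llm_config, a small helper (hypothetical, not part of autogen) can apply the workaround uniformly by wrapping the flat dict into the config_list form:

```python
def to_config_list(llm_config: dict) -> dict:
    # Leave configs that already use config_list untouched; wrap flat ones.
    if "config_list" in llm_config:
        return llm_config
    return {"config_list": [dict(llm_config)]}


flat = {"model": "gpt-4", "api_key": "sk-placeholder"}  # illustrative values
print(to_config_list(flat)["config_list"][0]["model"])  # gpt-4
```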
I have the same error. Did you find the answer? Thanks
Yes @huapsy, I posted the fix above!
Just do this:
llm_config = {
    "config_list": [
        {
            "model": "gpt-4",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        },
    ],
}