Can't run simulation: "TOKEN LIMIT EXCEEDED" error
I installed everything correctly and it all seems to work, but when I try to run it, even with the step count set to 1, it just prints "TOKEN LIMIT EXCEEDED". I'm not sure where the problem is; did OpenAI change something that broke the code? I have a paid API key on usage tier 3, so it should work. I tracked the API key the code is using, but it seems it isn't even calling GPT-3; it only makes calls to "text-embedding-ada-002-v2". If OpenAI's API tracking is accurate, it never made any calls to the GPT API.
Has anyone been able to run this recently? Does anyone have any idea what could be the cause of this error?
Edit: Also, it always ends in an error before asking to enter an option again:
Today is February 13, 2023. From 00:00AM ~ 00:00AM, Isabella Rodriguez is planning on TOKEN LIMIT EXCEEDED.
In 5 min increments, list the subtasks Isabella does when Isabella is TOKEN LIMIT EXCEEDED from (total duration in minutes 1440):
1) Isabella is
TOKEN LIMIT EXCEEDED
TOODOOOOOO
TOKEN LIMIT EXCEEDED
-==- -==- -==-
TOODOOOOOO
TOKEN LIMIT EXCEEDED
-==- -==- -==-
Traceback (most recent call last):
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/reverie.py", line 468, in open_server
rs.start_server(int_count)
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/reverie.py", line 379, in start_server
next_tile, pronunciatio, description = persona.move(
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/persona.py", line 222, in move
plan = self.plan(maze, personas, new_day, retrieved)
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/persona.py", line 148, in plan
return plan(self, maze, personas, new_day, retrieved)
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 959, in plan
_determine_action(persona, maze)
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 573, in _determine_action
generate_task_decomp(persona, act_desp, act_dura))
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 164, in generate_task_decomp
return run_gpt_prompt_task_decomp(persona, task, duration)[0]
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 439, in run_gpt_prompt_task_decomp
output = safe_generate_response(prompt, gpt_param, 5, get_fail_safe(),
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/gpt_structure.py", line 268, in safe_generate_response
return func_clean_up(curr_gpt_response, prompt=prompt)
File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 378, in __func_clean_up
duration = int(k[1].split(",")[0].strip())
IndexError: list index out of range
Error.
Enter option:
Mine runs for a little while. I will re-run it in the next few days and can show you how far mine gets. It seems less like a bug in the code and more like something tied to OpenAI limits I could change. But I'm glad to hear I'm not the only one with this issue, because I thought there might have been something wrong with my code.
I have the exact same issue
I solved the problem. I will list the solution here and make a PR later (although the author seems to be inactive nowadays when it comes to reviewing PRs).
The issue is simple: on January 4th, 2024, OpenAI shut down all models named "text-davinci-00x" (x = 1, 2, 3). However, since model names are hard-coded all over the Generative Agents codebase instead of living in a single setup file, you need to search the entire project for the keyword "davinci" and replace every occurrence with "gpt-3.5-turbo-instruct" (see: https://platform.openai.com/docs/deprecations).
A note on why this error is hard to spot: the original code assumed the only possible failure was the token count going out of bounds (with hard-coded limits); it did not anticipate models being retired. I found the real error by replacing the try/except structure so it prints the exception message directly instead of the hard-coded "TOKEN LIMIT EXCEEDED" message.
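The search-and-replace above can be scripted; here is a minimal sketch (the path and helper names are my own, not part of the repo — adjust `root` to your checkout):

```python
import pathlib
import re

# Matches the deprecated completion models hard-coded in the repo.
DEPRECATED = re.compile(r"text-davinci-00[123]")
REPLACEMENT = "gpt-3.5-turbo-instruct"

def patch_model_names(text: str) -> str:
    """Return `text` with every deprecated davinci model name replaced."""
    return DEPRECATED.sub(REPLACEMENT, text)

def patch_tree(root: str) -> int:
    """Rewrite every .py file under `root` in place; return how many changed."""
    changed = 0
    for path in pathlib.Path(root).rglob("*.py"):
        src = path.read_text(encoding="utf-8")
        dst = patch_model_names(src)
        if dst != src:
            path.write_text(dst, encoding="utf-8")
            changed += 1
    return changed

# Usage (assumed layout): patch_tree("reverie/backend_server")
```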
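The try/except change can be sketched like this (`gpt_request` and the call shape are hypothetical stand-ins for the wrapper in gpt_structure.py; the point is surfacing the real exception text):

```python
def gpt_request(call):
    """Run an API call, returning the result or the real error message."""
    try:
        return call()
    except Exception as e:
        # The original handler printed a hard-coded "TOKEN LIMIT EXCEEDED"
        # for every failure, which hid the model-deprecation error.
        print(f"Error occurred: {e}")
        return f"ERROR: {e}"
```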
I have changed all the "text-davinci-00x" occurrences to "gpt-3.5-turbo-instruct", but I still receive the TOKEN LIMIT EXCEEDED message. Why is that?
I had the same issue, but I wasn't using OpenAI's model. Instead, I used another LLM that is compatible with OpenAI's interface style, so I replaced all the model names and modified the corresponding base_url, but I still hit this problem. I then updated the OpenAI library and replaced the corresponding old-version interfaces. At that point most of the functionality worked; the only remaining issue was that the model I'm using does not support OpenAI's legacy completions interface (/v1/completions). So I changed "openai.Completion.create" in "gpt_structure.py" to "openai.chat.completions.create" and changed the parameter "prompt=prompt" to "messages=[{"role": "user", "content": prompt}]". That resolved the compatibility issue, and when I ran it again, it worked fine! I hope my debugging experience helps you.
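The change above can be sketched as follows (assuming the openai>=1.0 client; `chat_request` and `to_chat_messages` are hypothetical names, not the project's own functions):

```python
def to_chat_messages(prompt: str) -> list:
    """Wrap a legacy completion-style prompt as a single chat user message."""
    return [{"role": "user", "content": prompt}]

def chat_request(client, model: str, prompt: str) -> str:
    """Replacement for the old openai.Completion.create call: send the same
    prompt through the chat endpoint (client is an openai.OpenAI instance)."""
    response = client.chat.completions.create(
        model=model,
        messages=to_chat_messages(prompt),
    )
    return response.choices[0].message.content
```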
When I use another LLM, I get the same error.
I tried your method, but it didn't work, so I looked up the OpenAI documentation. People using gpt-3.5-turbo just need to change "GPT_request" in "gpt_structure.py", modifying the parameter "prompt=prompt" to "messages=[{"role": "user", "content": prompt}]"; they don't need to change "openai.Completion.create" to "openai.chat.completions.create". But thanks very much anyway!!!
Changing all the "text-davinci-00x" occurrences in "run_gpt_prompt.py" to "gpt-3.5-turbo-instruct" solved the problem for me.
I'm having the same problem... none of these steps worked. Does anyone have more solutions?
As I dug down, I found the error is printed in gpt_structure.py, lines 221-224, as "TOKEN LIMIT EXCEEDED". But if you look closely, this is just a catch-all exception handler, so for the sake of our sanity, I'd suggest changing it to:
except Exception as e:
print(f"Error occurred: {str(e)}")
return f"ERROR: {str(e)}"
That led me to realize that the original authors wrote it for davinci, which has now been deprecated by OpenAI. Since gpt-3.5 is also on its way to being deprecated, y'all should do a find-and-replace-all to gpt-4o-mini in run_gpt_prompt.py.
However, don't forget to change the API call in gpt_structure.py to the snippet below, because gpt-4o-mini is a ChatCompletion model:
response = openai.ChatCompletion.create(
model=gpt_parameter["engine"],
messages=[{"role": "system", "content": prompt}],
temperature=gpt_parameter["temperature"],
max_tokens=gpt_parameter["max_tokens"],
top_p=gpt_parameter["top_p"],
frequency_penalty=gpt_parameter["frequency_penalty"],
presence_penalty=gpt_parameter["presence_penalty"],
stream=gpt_parameter["stream"],
stop=gpt_parameter["stop"],
)
This solved my issue for now 👍🏼
You gave such a useful solution!! The error is caused by "ERROR: text", not "token limit exceeded", which took me a long time to realize. One small difference for me: I also had to change response.choices[0].text to response["choices"][0]["message"]["content"] to solve my problem.
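For reference, the two response shapes differ like this (a sketch using plain dicts that mirror the API payloads; `extract_text` is a hypothetical helper, not part of the repo):

```python
def extract_text(response: dict, is_chat: bool) -> str:
    """Pull generated text from a chat- or legacy-completions-style response."""
    choice = response["choices"][0]
    if is_chat:
        # Chat completions: choices[0]["message"]["content"]
        return choice["message"]["content"]
    # Legacy completions: choices[0]["text"]
    return choice["text"]
```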