
run_gpt_prompt_act_obj_desc() returns error "NoneType" object is not subscriptable.

Open · certainforest opened this issue 2 years ago · 3 comments

Hi everyone,

I'm running into a NoneType error in the function run_gpt_prompt_act_obj_desc(). It seems to occur, generally, when a JSON response to a prompt is expected.

Based on the error trace, I think the issue is with how the prompt completions are happening. I'm using Llama 13B Chat, and I can see the output is not in JSON format.


How is everyone handling this? And how are you ensuring output consistency when running simulations with GPT (vs. another model)?

Would love thoughts! πŸ˜„

certainforest — Aug 23 '23

I am also running into this issue 😅 I presume you also used llama_cpp_python as a translation layer for OpenAI's API? I'll likely try to write some code that extracts the model's message and manually wraps it in JSON. The Llama models (at least the smaller-parameter ones) clearly don't handle JSON formatting reliably.
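A minimal sketch of that extract-and-wrap idea (`coerce_to_output_json` is a hypothetical helper, not part of the repo): try to pull a JSON object out of the model's reply, and if none parses, wrap the raw text in the `{"output": ...}` shape the prompt asks for:

```python
import json
import re

def coerce_to_output_json(raw: str) -> dict:
    """Best-effort: recover a JSON object from a chatty model reply,
    or wrap the raw text in the expected {"output": ...} shape."""
    # Look for anything that resembles a JSON object in the reply.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass  # Malformed JSON; fall through to wrapping.
    # No parseable JSON: wrap the stripped reply ourselves.
    return {"output": raw.strip().strip('"')}

print(coerce_to_output_json('Sure! {"output": "being slept in"}'))
print(coerce_to_output_json("being slept in"))
```

This won't fix a model that ignores instructions entirely, but it lets the downstream validators keep working when the reply contains the right phrase in the wrong wrapper.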

```
GNS FUNCTION: <generate_act_obj_desc>
asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6
CHAT GPT PROMPT
"""
Task: We want to understand the state of an object that is being used by someone.

Let's think step by step. We want to know about bed's state.
Step 1. Isabella Rodriguez is at/using the sleeping.
Step 2. Describe the bed's state: bed is
"""
Output the response to the prompt above in json. The output should ONLY contain the phrase that should go in <fill in>.
Example output json:
{"output": "being fixed"}

Traceback (most recent call last):
  File "...\generative_agents-main\reverie\backend_server\reverie.py", line 468, in open_server
    rs.start_server(int_count)
  File "...\generative_agents-main\reverie\backend_server\reverie.py", line 379, in start_server
    next_tile, pronunciatio, description = persona.move(
  File "...\generative_agents-main\reverie\backend_server\persona\persona.py", line 222, in move
    plan = self.plan(maze, personas, new_day, retrieved)
  File "...\generative_agents-main\reverie\backend_server\persona\persona.py", line 148, in plan
    return plan(self, maze, personas, new_day, retrieved)
  File "...\generative_agents-main\reverie\backend_server\persona\cognitive_modules\plan.py", line 959, in plan
    _determine_action(persona, maze)
  File "...\generative_agents-main\reverie\backend_server\persona\cognitive_modules\plan.py", line 635, in _determine_action
    act_obj_desp = generate_act_obj_desc(act_game_object, act_desp, persona)
  File "...\generative_agents-main\reverie\backend_server\persona\cognitive_modules\plan.py", line 269, in generate_act_obj_desc
    return run_gpt_prompt_act_obj_desc(act_game_object, act_desp, persona)[0]
TypeError: 'NoneType' object is not subscriptable
```
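On the crashing side, the immediate `TypeError` comes from indexing `[0]` on a `None` return. A minimal defensive sketch (assuming, as the traceback suggests, that the prompt function returns `(output, meta)` on success and `None` when every retry fails; `run_prompt` and the fallback string here are stand-ins, not the repo's actual code):

```python
def generate_act_obj_desc_safe(act_game_object, act_desp, persona, run_prompt):
    """Call the prompt function, but fall back to a generic object
    description instead of crashing when it returns None."""
    result = run_prompt(act_game_object, act_desp, persona)
    if result is None:
        # Hypothetical fallback so the simulation can keep moving.
        return f"{act_game_object} is idle"
    return result[0]

# Simulated failure: the model never produced valid JSON.
print(generate_act_obj_desc_safe("bed", "sleeping", None, lambda *a: None))
# Simulated success: first element of the returned tuple is used.
print(generate_act_obj_desc_safe("bed", "sleeping", None,
                                 lambda *a: ("being slept in", {})))
```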

ElliottDyson — Aug 28 '23

It's a shame, because I found a model that provides amazing answers for its size in every way except JSON formatting. I figured I might as well create a Discord server for collaboration on this work in general, since I think it has some very interesting use cases if ported properly. Feel free to join if you'd like to work on solving this problem together: https://discord.gg/GefGyX4qT6

ElliottDyson — Aug 28 '23

Probably caused by using models other than GPT. I don't know how to solve it properly, but I try to skip it when it fails. See `reverie\backend_server\persona\prompt_template\run_gpt_prompt.py`, lines 1003–1016:

```python
  print ("asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6") ########
  gpt_param = {"engine": "text-davinci-002", "max_tokens": 150,
               "temperature": 0, "top_p": 1, "stream": False,
               "frequency_penalty": 0, "presence_penalty": 0, "stop": None}
  prompt_template = "persona/prompt_template/v3_ChatGPT/generate_obj_event_v1.txt" ########
  prompt_input = create_prompt_input(act_game_object, act_desp, persona)  ########
  prompt = generate_prompt(prompt_input, prompt_template)
  example_output = "being fixed" ########
  special_instruction = "The output should ONLY contain the phrase that should go in <fill in>." ########
  fail_safe = get_fail_safe(act_game_object) ########
  output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
                                          __chat_func_validate, __chat_func_clean_up, True)
  if output != False:
    return output, [output, prompt, gpt_param, prompt_input, fail_safe]
```

Add this at the end: `return fail_safe, [output, prompt, gpt_param, prompt_input, fail_safe]`
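The idea behind that final line — always return the fail-safe value rather than `None`, so callers can still index `[0]` — can be sketched generically. `try_generate` and `validate` below are stand-ins for the repo's model call and `__chat_func_validate`, not its actual helpers:

```python
def safe_generate(prompt, fail_safe, try_generate, validate, repeat=3):
    """Try the model a few times; if no attempt validates, return the
    fail-safe value instead of None so callers can safely index [0]."""
    for _ in range(repeat):
        candidate = try_generate(prompt)
        if validate(candidate):
            return candidate, [candidate, prompt, fail_safe]
    # Mirrors the suggested final line: never return None.
    return fail_safe, [None, prompt, fail_safe]

# Every attempt fails validation -> the fail-safe comes back.
print(safe_generate("describe the bed", "idle",
                    lambda p: None, lambda c: c is not None)[0])
# A valid attempt -> the model output comes back.
print(safe_generate("describe the bed", "idle",
                    lambda p: "being slept in", lambda c: c is not None)[0])
```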

Lisiyuan233 — Aug 04 '25