run_gpt_prompt_act_obj_desc() returns error: "'NoneType' object is not subscriptable"
Hi everyone,
I'm running into a NoneType error when I call the function run_gpt_prompt_act_obj_desc() - it looks like this occurs, generally, whenever a prompt is expected to return a JSON response.
Based on the error trace, I think the issue is with how the prompt completions are happening... I'm using Llama 13B Chat, and I can see the output is not in JSON format.
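From the trace, my rough guess at the failure mode is something like this (a hypothetical sketch, not the repo's exact code): the clean-up step parses the completion as JSON, gets nothing back when the model returns plain text, and a later step then indexes that None.

import json

def clean_up(raw_completion):
    # Hypothetical clean-up step: parse the completion as JSON and
    # pull out the "output" field the prompt asks for.
    try:
        return json.loads(raw_completion)["output"]
    except (json.JSONDecodeError, TypeError, KeyError):
        return None  # plain-text llama output ends up here

parsed = clean_up("bed is being slept in")  # llama-style plain text, not JSON
# parsed is None, so any later code that indexes it, e.g. parsed[0],
# raises: TypeError: 'NoneType' object is not subscriptable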
How is everyone handling this? And/or how are you ensuring output consistency when running simulations with GPT (vs. another model)?
Would love thoughts!
I am also running into this issue. I presume you also used llama_cpp_python as a translation layer for OpenAI's API? I think I'll try to write some code that extracts the model's message and manually wraps it in JSON (see the sketch after the debug output below). The llama models (at least the smaller-parameter ones) clearly do not have the training for JSON formatting.
GNS FUNCTION: <generate_act_obj_desc>
asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6
CHAT GPT PROMPT
"""
Task: We want to understand the state of an object that is being used by someone.
Let's think step by step.
We want to know about bed's state.
Step 1. Isabella Rodriguez is at/using the sleeping.
Step 2. Describe the bed's state: bed is
"""
Output the response to the prompt above in json. The output should ONLY contain the phrase that should go in <fill in>.
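Roughly what I have in mind for the extraction (a rough sketch; coerce_to_json is a name I made up, and it assumes the repo's clean-up step expects a {"output": ...} JSON envelope):

import json
import re

def coerce_to_json(raw_completion):
    # Pass the completion through untouched if it is already valid JSON.
    try:
        json.loads(raw_completion)
        return raw_completion
    except json.JSONDecodeError:
        pass
    # Otherwise, try to salvage an embedded {...} object from the text...
    match = re.search(r"\{.*\}", raw_completion, re.DOTALL)
    if match:
        try:
            json.loads(match.group(0))
            return match.group(0)
        except json.JSONDecodeError:
            pass
    # ...and fall back to wrapping the plain text ourselves.
    return json.dumps({"output": raw_completion.strip().strip('"`')})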
It's a shame, because I found a model that provides amazing answers for its size in every way except JSON formatting. I figured I might as well create a Discord server for collaboration on this work in general, since I think it has some very interesting use cases if properly ported. Feel free to join if you'd like to work on solving this problem together: https://discord.gg/GefGyX4qT6
This is probably caused by using models other than GPT. I don't know how to solve it, but I try to skip it when it fails. In reverie\backend_server\persona\prompt_template\run_gpt_prompt.py, lines 1003-1016:
print ("asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6") ########
gpt_param = {"engine": "text-davinci-002", "max_tokens": 150,
"temperature": 0, "top_p": 1, "stream": False,
"frequency_penalty": 0, "presence_penalty": 0, "stop": None}
prompt_template = "persona/prompt_template/v3_ChatGPT/generate_obj_event_v1.txt" ########
prompt_input = create_prompt_input(act_game_object, act_desp, persona) ########
prompt = generate_prompt(prompt_input, prompt_template)
example_output = "being fixed" ########
special_instruction = "The output should ONLY contain the phrase that should go in <fill in>." ########
fail_safe = get_fail_safe(act_game_object) ########
output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
__chat_func_validate, __chat_func_clean_up, True)
if output != False:
return output, [output, prompt, gpt_param, prompt_input, fail_safe]
Then add this at the end: return fail_safe, [output, prompt, gpt_param, prompt_input, fail_safe]
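In context, my reading of that suggestion is that the end of the function becomes (untested):

output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
                                        __chat_func_validate, __chat_func_clean_up, True)
if output != False:
    return output, [output, prompt, gpt_param, prompt_input, fail_safe]
# Fall back to the fail-safe instead of falling off the end and returning None,
# which is what makes callers crash with "'NoneType' object is not subscriptable".
return fail_safe, [output, prompt, gpt_param, prompt_input, fail_safe]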