
Error while execution generate_user_intent: 'NoneType' object has no attribute 'agenerate_prompt'

DrmedAllel opened this issue 1 year ago · 1 comment

I tried to connect NeMo-Guardrails to Microsoft Azure as explained in this issue.

My config.yml is like this:

```yaml
type: main
engine: azure
model: gpt-4
parameters:
  azure_endpoint: https://*******/
  api_version: 2023-07-01-preview
  deployment_name: gpt-4-0613-preview
  api_key: ************************
```
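For reference, the config.yml layout described in the NeMo-Guardrails documentation nests these settings under a top-level `models:` list; without that wrapper the toolkit may not instantiate a main LLM at all. A minimal sketch, with endpoint, deployment name, and key as placeholders:

```yaml
# Sketch of the documented layout; all values here are placeholders.
models:
  - type: main
    engine: azure
    model: gpt-4
    parameters:
      azure_endpoint: https://<your-resource>.openai.azure.com/
      api_version: 2023-07-01-preview
      deployment_name: gpt-4-0613-preview
      api_key: <your-azure-openai-key>
```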

The Python code I use looks like this:

```python
import os
from nemoguardrails import LLMRails, RailsConfig

os.environ["OPENAI_API_KEY"] = "****"


def read_file(path):
    with open(path, "r") as f:
        return f.read()


colang_content = read_file("T2000/config/topics.co")
yaml_content = read_file("T2000/config/config.yml")

# initialize rails config
config = RailsConfig.from_content(
    yaml_content=yaml_content,
    colang_content=colang_content,
)

# create rails
rails = LLMRails(config)

options = {
    "output_vars": ["triggered_input_rail"],
    "log": {"activated_rails": True},
}


def generate(prompt):
    return rails.generate(prompt, options=options)


print("\033c")
while True:
    prompt = input("Input: ")
    print(generate(prompt))
    print("\n")
```

Can someone help me? The script keeps running but I get the following error printed as a string to the console:

```
Input: hallo
Parameter temperature does not exist for NoneType
Error while execution generate_user_intent: 'NoneType' object has no attribute 'agenerate_prompt'
response="I'm sorry, an internal error has occurred." llm_output=None output_data={'triggered_input_rail': None} log=GenerationLog(activated_rails=[ActivatedRail(type='dialog', name='generate user intent', decisions=['execute generate_user_intent'], executed_actions=[ExecutedAction(action_name='generate_user_intent', action_params={}, return_value=None, llm_calls=[], started_at=1718713098.935654, finished_at=1718713098.947871, duration=0.012217044830322266)], stop=False, additional_info=None, started_at=1718713098.9356449, finished_at=1718713098.947905, duration=0.012260198593139648)], stats=GenerationStats(input_rails_duration=None, dialog_rails_duration=0.012260198593139648, generation_rails_duration=None, output_rails_duration=None, total_duration=0.014315128326416016, llm_calls_duration=0, llm_calls_count=0, llm_calls_total_prompt_tokens=0, llm_calls_total_completion_tokens=0, llm_calls_total_tokens=0), llm_calls=None, internal_events=None, colang_history=None) state=None
```
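For what it's worth, the `Parameter temperature does not exist for NoneType` line suggests that no main LLM was created from the config, so the dialog action receives `None` instead of a model. A quick check, assuming the `llm` attribute on `LLMRails` holds the main model:

```python
# If this prints <class 'NoneType'>, the model settings in config.yml were not
# picked up and the Azure model was never instantiated.
print(type(rails.llm))
```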

DrmedAllel · Jun 18 '24 12:06

Hi @DrmedAllel, are you still facing this issue? This might help in resolving it.

Pouyanpi · Aug 16 '24 14:08