aistudio-copilot-sample
Connection 'AzureAISearch' required for flow 'copilot_promptflow' is not found.
Just followed the readme. Created the search index. Made sure the .env contains the keys etc. related to Azure Search. Also made sure the flow YAML contains the search index:
```yaml
- name: retrieve_documentation
  type: python
  source:
    type: code
    path: retrieve_documentation.py
  inputs:
    search: AzureAISearch
    question: ${inputs.question}
    index_name: product-info
    embedding: ${question_embedding.output}
```

However, when running the prompt flow command, I kept receiving the following error on both my local machine and in my Codespace:

```
python src/run.py --implementation promptflow --question "what is the waterproof rating of the tent I just ordered?"
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 815, in get_local_connections_from_executable
    conn = client.connections.get(name=n, with_secrets=True)
  File "/usr/local/lib/python3.10/site-packages/promptflow/_telemetry/activity.py", line 138, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/promptflow/_sdk/operations/_connection_operations.py", line 54, in get
    orm_connection = ORMConnection.get(name, raise_error)
  File "/usr/local/lib/python3.10/site-packages/promptflow/_sdk/_orm/retry.py", line 43, in f_retry
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/promptflow/_sdk/_orm/connection.py", line 52, in get
    raise ConnectionNotFoundError(f"Connection {name!r} is not found.")
promptflow._sdk._errors.ConnectionNotFoundError: Connection 'AzureAISearch' is not found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspaces/aistudio-copilot-sample/src/run.py", line 285, in
```
Thanks for reporting! This is a regression that was introduced in prompt flow; we are currently working on a fix and will update this issue when we have it.
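In the meantime, one possible workaround is to register the connection locally under the exact name the flow expects. A minimal sketch, assuming the promptflow SDK's `CognitiveSearchConnection` entity; the environment variable names here are placeholders, not the sample's actual names, so adjust them to your setup:

```python
# Sketch of a workaround: register the Azure AI Search connection in the
# local prompt flow store under the name referenced by the flow YAML.
import os

from promptflow import PFClient
from promptflow.entities import CognitiveSearchConnection

pf = PFClient()
connection = CognitiveSearchConnection(
    name="AzureAISearch",  # must match the 'search' input in the flow YAML
    api_key=os.environ["AZURE_AI_SEARCH_KEY"],       # placeholder env var name
    api_base=os.environ["AZURE_AI_SEARCH_ENDPOINT"], # placeholder env var name
    api_version="2023-07-01-preview",                # assumed API version
)
pf.connections.create_or_update(connection)
```

After this, `pf.connections.list()` should show 'AzureAISearch' and the flow should be able to resolve it.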