
[Bug]: openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} when running with Azure OpenAI

Open · lsukharn opened this issue 8 months ago • 1 comment

Do you need to file an issue?

  • [x] I have searched the existing issues and this bug is not already filed.
  • [x] My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
  • [x] I believe this is a legitimate bug, not just a question. If this is a question, please use the Discussions area.

Describe the bug

While running the global search script, calls to the Azure OpenAI model fail with a 'Resource not found' error:

... see logs for the entire trace

Exception in _map_response_single_batch
Traceback (most recent call last):
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\graphrag\query\structured_search\global_search\search.py", line 223, in _map_response_single_batch
    model_response = await self.model.achat(
  ...
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\openai\_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

The same exception is raised for every map batch, and the run ends with:

Warning: All map responses have score 0 (i.e., no relevant information found from the dataset), returning a canned 'I do not know' answer. You can try enabling allow_general_knowledge to encourage the LLM to incorporate relevant general knowledge, at the risk of increasing hallucinations.

Here is the script (imports omitted).

api_key = os.environ["GRAPHRAG_API_KEY"]
llm_model = os.environ["GRAPHRAG_LLM_MODEL"]
api_base = os.environ["API_BASE_TEST"]
deployment_name = os.environ["GRAPHRAG_LLM_MODEL_DEPLOYMENT_NAME"]

config = LanguageModelConfig(
    api_key=api_key,
    type=ModelType.AzureOpenAIChat,
    api_base=api_base,
    api_version='2025-01-01-preview',
    model=llm_model,
    deployment_name=deployment_name,
    max_retries=20,
)
model = ModelManager().get_or_create_chat_model(
    name="global_search",
    model_type=ModelType.AzureOpenAIChat,
    config=config,
)

token_encoder = tiktoken.encoding_for_model(llm_model)

Runs OK
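
Before wiring the model into GlobalSearch, it can help to confirm that the handle works on its own; if this minimal call already raises the 404, the problem is the endpoint/deployment/api-version combination rather than the search code. A quick smoke-test sketch (not part of the original notebook; the prompt text is arbitrary, and the response attribute names may vary slightly across graphrag versions):

# Smoke test: call the chat model directly, bypassing GlobalSearch.
# achat is the same entry point the failing traceback goes through
# (graphrag/language_model/providers/fnllm/models.py -> achat).
response = await model.achat("Reply with the single word: pong")
print(response.output.content)  # assumption: the response exposes .output.content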

# parquet files generated from indexing pipeline
OUTPUT_DIR = "./graphrag_project/output"
COMMUNITY_REPORT_TABLE = "community_reports"
ENTITY_TABLE = "entities"
COMMUNITY_TABLE = "communities"

COMMUNITY_LEVEL = 2

community_df = pd.read_parquet(f"{OUTPUT_DIR}/{COMMUNITY_TABLE}.parquet")
entity_df = pd.read_parquet(f"{OUTPUT_DIR}/{ENTITY_TABLE}.parquet")
report_df = pd.read_parquet(f"{OUTPUT_DIR}/{COMMUNITY_REPORT_TABLE}.parquet")

communities = read_indexer_communities(community_df, report_df)
reports = read_indexer_reports(
    report_df,
    community_df,
    community_level=COMMUNITY_LEVEL,
    dynamic_community_selection=True,
)
entities = read_indexer_entities(
    entity_df, community_df, community_level=COMMUNITY_LEVEL
)

print(f"Total report count: {len(report_df)}")
print(
    f"Report count after filtering by community level {COMMUNITY_LEVEL}: {len(reports)}"
)

report_df.head()

Runs OK

context_builder = GlobalCommunityContext(
    community_reports=reports,
    communities=communities,
    entities=entities,  # default to None if you don't want to use community weights for ranking
    token_encoder=token_encoder,
)
context_builder_params = {
    "use_community_summary": False,  # False means using full community reports. True means using community short summaries.
    "shuffle_data": True,
    "include_community_rank": True,
    "min_community_rank": 0,
    "community_rank_name": "rank",
    "include_community_weight": True,
    "community_weight_name": "occurrence weight",
    "normalize_community_weight": True,
    "max_tokens": 12_000,  # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 5000)
    "context_name": "Reports",
}

map_llm_params = {
    "max_tokens": 1000,
    "temperature": 0.0,
    "response_format": {"type": "json_object"},
}

reduce_llm_params = {
    "max_tokens": 2000,  # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 1000-1500)
    "temperature": 0.0,
}

Runs OK

search_engine = GlobalSearch(
    model=model,
    context_builder=context_builder,
    token_encoder=token_encoder,
    max_data_tokens=12_000,  # change this based on the token limit you have on your model (if you are using a model with 8k limit, a good setting could be 5000)
    map_llm_params=map_llm_params,
    reduce_llm_params=reduce_llm_params,
    allow_general_knowledge=False,  # setting this to True adds instructions that encourage the LLM to incorporate general knowledge in the response, which may increase hallucinations but can be useful in some use cases.
    json_mode=True,  # set this to False if your LLM model does not support JSON mode.
    context_builder_params=context_builder_params,
    concurrent_coroutines=32,
    response_type="multiple paragraphs",  # free form text describing the response type and format, can be anything, e.g. prioritized list, single paragraph, multiple paragraphs, multiple-page report
)

Runs OK

result = await search_engine.search("How many ghosts visited Scrooge?")

print(result.response)

Fails here with the error shared above.

Steps to reproduce

  1. Index A Christmas Carol on disk with graphrag index --root .\graphrag_project. The settings were generated by graphrag init; I updated only the API keys, deployment, and base_url for my Azure AI hub.
  2. Use this Jupyter notebook: https://github.com/microsoft/graphrag/blob/main/docs/examples_notebooks/global_search.ipynb and update it to access Azure OpenAI with:
api_key = os.environ["GRAPHRAG_API_KEY"]
llm_model = os.environ["GRAPHRAG_LLM_MODEL"]
api_base = os.environ["API_BASE_TEST"]
deployment_name = os.environ["GRAPHRAG_LLM_MODEL_DEPLOYMENT_NAME"]

config = LanguageModelConfig(
    api_key=api_key,
    type=ModelType.AzureOpenAIChat,
    api_base=api_base,
    api_version='2025-01-01-preview',
    model=llm_model,
    deployment_name=deployment_name,
    max_retries=20,
)
model = ModelManager().get_or_create_chat_model(
    name="global_search",
    model_type=ModelType.AzureOpenAIChat,
    config=config,
)
  3. Run everything up to the last cell to see the error presented above.

Expected Behavior

At the end of the script I expect a response from the LLM, as it worked with the previous version of the global search notebook.

GraphRAG Config Used

# Paste your config here

Logs and screenshots

Exception in _map_response_single_batch
Traceback (most recent call last):
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\graphrag\query\structured_search\global_search\search.py", line 223, in _map_response_single_batch
    model_response = await self.model.achat(
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\graphrag\language_model\providers\fnllm\models.py", line 83, in achat
    response = await self.model(prompt, history=history, **kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\openai\llm\openai_chat_llm.py", line 94, in __call__
    return await self._text_chat_llm(prompt, **kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\openai\services\openai_tools_parsing.py", line 130, in __call__
    return await self._delegate(prompt, **kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\base\base_llm.py", line 144, in __call__
    return await self._decorated_target(prompt, **kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\base\services\json.py", line 77, in invoke
    return await this.invoke_json(delegate, prompt, kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\base\services\json.py", line 96, in invoke_json
    return await self.try_receive_json(delegate, prompt, kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\base\services\json.py", line 162, in try_receive_json
    result = await delegate(prompt, **kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\base\services\rate_limiter.py", line 75, in invoke
    result = await delegate(prompt, **args)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\base\base_llm.py", line 126, in _decorator_target
    output = await self._execute_llm(prompt, kwargs)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\fnllm\openai\llm\openai_text_chat_llm.py", line 166, in _execute_llm
    completion = await self._client.chat.completions.create(
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\openai\resources\chat\completions\completions.py", line 1927, in create
    return await self._post(
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\openai\_base_client.py", line 1767, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\openai\_base_client.py", line 1461, in request
    return await self._request(
  File "C:\Users\username\Desktop\graphrag_repo\.venv\Lib\site-packages\openai\_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

(The same traceback repeats once per concurrent map batch.)

Warning: All map responses have score 0 (i.e., no relevant information found from the dataset), returning a canned 'I do not know' answer. You can try enabling allow_general_knowledge to encourage the LLM to incorporate relevant general knowledge, at the risk of increasing hallucinations.

Additional Information

  • GraphRAG Version: 2.1.0
  • Operating System: Windows 11
  • Python Version: 3.12
  • Related Issues:

lsukharn · Apr 04 '25 19:04

@lsukharn Based on my previous experiences, it seems this issue may be related to the Azure OpenAI resource. Not all Azure OpenAI model deployments support all features; this depends on the version, region, and a few other factors. Could you please check your API version and let me know if you continue to face this issue?
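
To isolate the 404 from graphrag entirely, a bare call with the openai SDK against the same resource can confirm whether the endpoint, deployment name, and API version resolve at all. A minimal sketch reusing the environment variables from the issue; the api_version value here is an assumption, so substitute one your resource actually supports:

import os

from openai import AzureOpenAI  # the sync client is enough for a connectivity check

client = AzureOpenAI(
    api_key=os.environ["GRAPHRAG_API_KEY"],
    azure_endpoint=os.environ["API_BASE_TEST"],
    api_version="2024-10-21",  # assumption: a GA version; use one supported in your region
)

resp = client.chat.completions.create(
    model=os.environ["GRAPHRAG_LLM_MODEL_DEPLOYMENT_NAME"],  # Azure expects the deployment name here
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(resp.choices[0].message.content)

If this call also returns 404, the problem lies in the resource configuration (endpoint URL, deployment name, or api-version) rather than in graphrag.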

For example, in our company, the API version we were using did not support structured output responses using Pydantic models. As a workaround, I had to either format the responses in the functions or use libraries like instructor.
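
For reference, a sketch of that manual-formatting workaround: parse and validate the model's JSON output client-side with Pydantic instead of relying on server-side structured outputs. The schema and field names below are hypothetical, purely for illustration:

import json

from pydantic import BaseModel, ValidationError

class MapResponse(BaseModel):
    """Hypothetical schema for a map-step answer."""
    answer: str
    score: int

def parse_model_output(raw: str) -> MapResponse | None:
    # Validate the raw JSON string ourselves, so the API version
    # does not need to support structured outputs natively.
    try:
        return MapResponse.model_validate(json.loads(raw))
    except (json.JSONDecodeError, ValidationError):
        return None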

Krishna2709 · Apr 28 '25 11:04