
Get Resource not Found 404 Error during index creation

leodatavinci opened this issue 1 year ago

After running `python -m graphrag.index --root ./ragtest`

I get a 404 Resource Not Found error; however, I'm not sure what the reason for this could be, as I have tested the endpoint and the API key that I provide. I'm wondering whether it is related to the chat completions endpoint and the API version.

I get the following error in the log:

```
{ "type": "error", "data": "Error Invoking LLM", "stack": "
Traceback (most recent call last):
  File "C:\Users\\miniconda3\envs\graphrag\lib\site-packages\graphrag\llm\base\base_llm.py", line 53, in _invoke
    output = await self._execute_llm(input, **kwargs)
  File "C:\Users\\miniconda3\envs\graphrag\lib\site-packages\graphrag\llm\openai\openai_chat_llm.py", line 55, in _execute_llm
    completion = await self.client.chat.completions.create(
  File "C:\Users\\miniconda3\envs\graphrag\lib\site-packages\openai\resources\chat\completions.py", line 1289, in create
    return await self._post(
  File "C:\Users\\miniconda3\envs\graphrag\lib\site-packages\openai\_base_client.py", line 1816, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "C:\Users\\miniconda3\envs\graphrag\lib\site-packages\openai\_base_client.py", line 1514, in request
    return await self._request(
  File "C:\Users\\miniconda3\envs\graphrag\lib\site-packages\openai\_base_client.py", line 1610, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
", "source": "Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}" }
```

In my settings file I have the following configured:

```yaml
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: XXXX
  type: azure_openai_chat
  model: gpt-4o
  model_supports_json: true # recommended if this is available for your model
  max_tokens: 4000
  request_timeout: 180.0
  api_base: https://ai-xxxx.openai.azure.com
  # api_version: 2024-02-15-preview
  organization: <organization_id>
  deployment_name: gpt-4o
  tokens_per_minute: 150_000 # set a leaky bucket throttle
  requests_per_minute: 10_000 # set a leaky bucket throttle
  max_retries: 10
  max_retry_wait: 10.0
  sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  concurrent_requests: 25 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  # parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: XXXX
    type: azure_openai_embedding
    model: text-embedding-ada-002
    api_base: https://ai-xxxx.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    deployment_name: text-embedding-ada-002
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional
```

leodatavinci · Jul 07 '24 10:07

A 404 error typically indicates that you are using the wrong `api_version`. I used gpt-4o in my test as well, and I can confirm that it succeeded with `2024-02-15-preview`. You should uncomment the `api_version` lines for both `llm.api_version` and `embeddings.llm.api_version`.
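To see why a missing or stale `api_version` can surface as a 404, note that Azure OpenAI routes each request by deployment name and API version in the URL. A minimal illustrative sketch (the path format follows the Azure OpenAI REST API; the function name is hypothetical, not part of graphrag or the openai package):

```python
# Hypothetical sketch of how an Azure OpenAI chat-completions URL is assembled.
# If api_version is absent or refers to a retired version, the service cannot
# resolve the route and answers with 404 "Resource not found".
def azure_chat_url(api_base, deployment_name, api_version):
    url = f"{api_base}/openai/deployments/{deployment_name}/chat/completions"
    if api_version:
        url += f"?api-version={api_version}"
    return url

# With a valid api_version the request can be routed to the deployment:
print(azure_chat_url("https://ai-xxxx.openai.azure.com", "gpt-4o",
                     "2024-02-15-preview"))
# → https://ai-xxxx.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-15-preview
```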

eyast · Jul 07 '24 23:07

Hi @leodatavinci. As @eyast mentions, commenting out `api_version` causes the engine to fall back to a default version, which may already have been retired. Please try again after uncommenting both `api_version` options and setting them to a valid version; in my case I use the same one mentioned in the reply above.
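Concretely, the suggested fix amounts to uncommenting the `api_version` line in both LLM sections of the settings file (a sketch using the placeholder endpoint values from the original config):

```yaml
llm:
  type: azure_openai_chat
  api_base: https://ai-xxxx.openai.azure.com
  api_version: 2024-02-15-preview  # uncommented: pin an explicit API version

embeddings:
  llm:
    type: azure_openai_embedding
    api_base: https://ai-xxxx.openai.azure.com
    api_version: 2024-02-15-preview  # uncommented here as well
```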

AlonsoGuevara · Jul 09 '24 20:07

I uncommented the `api_version` lines and set them to `2024-02-15-preview`, and the problem still persists...

jiewenshuaishuai · Jul 18 '24 08:07