[Bug]: OpenAI Authentication error (openai.AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'})
Bug Description
While trying to execute a basic script using the Azure OpenAI models, I encounter this error:
openai.AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}
I have verified that the API keys being used are correct. The code that raises the error is the following:
import os

import dotenv
from llama_index.llms.azure_openai import AzureOpenAI

dotenv.load_dotenv()

llm = AzureOpenAI(
    engine=os.environ["AZURE_OPENAI_LLM_ENGINE"],
    model=os.environ["AZURE_OPENAI_LLM_MODEL"],
    temperature=0.0,
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version=os.environ["AZURE_OPENAI_VERSION"],
)
response = llm.complete("The sky is a beautiful blue and")
print(response)
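For context, the snippet above loads its configuration from a .env file. A minimal sketch of what that file is assumed to look like (the variable names match the code; all values here are placeholders, and the actual deployment name and API version depend on your Azure resource):

```
AZURE_OPENAI_LLM_ENGINE=<your-deployment-name>
AZURE_OPENAI_LLM_MODEL=gpt-35-turbo
AZURE_OPENAI_API_KEY=<your-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource-name>.openai.azure.com/
AZURE_OPENAI_VERSION=2023-07-01-preview
```

Note that the endpoint is the Azure OpenAI endpoint (`https://<resource>.openai.azure.com/`); a generic Cognitive Services endpoint may be worth ruling out given the audience mentioned in the 401 message.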
I can't seem to figure out what is wrong; I tried different llama-index versions without any luck. Does anyone know what is causing the error?
Version
0.10.19
Steps to Reproduce
Execute the following code in a Python file or in a notebook:
import os

import dotenv
from llama_index.llms.azure_openai import AzureOpenAI

dotenv.load_dotenv()

llm = AzureOpenAI(
    engine=os.environ["AZURE_OPENAI_LLM_ENGINE"],
    model=os.environ["AZURE_OPENAI_LLM_MODEL"],
    temperature=0.0,
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version=os.environ["AZURE_OPENAI_VERSION"],
)
response = llm.complete("The sky is a beautiful blue and")
print(response)
Relevant Logs/Tracebacks
File "C:\Users\MaximeLanglet\Desktop\QAKnowledge\QnAtool\src\rag.py", line 52, in <module>
response = llm.complete("The sky is a beautiful blue and")
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\llama_index\core\llms\callbacks.py", line 219, in wrapped_llm_predict
f_return_val = f(_self, *args, **kwargs)
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\llama_index\llms\openai\base.py", line 257, in complete
return complete_fn(prompt, **kwargs)
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\llama_index\core\base\llms\generic_utils.py", line 173, in wrapper
chat_response = func(messages, **kwargs)
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\llama_index\llms\openai\base.py", line 296, in _chat
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\openai\resources\chat\completions.py", line 663, in create
return self._post(
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\openai\_base_client.py", line 1200, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\openai\_base_client.py", line 889, in request
return self._request(
File "C:\Users\MaximeLanglet\miniconda3\lib\site-packages\openai\_base_client.py", line 980, in _request
raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}
The error message you're encountering indicates an issue authenticating with Azure's OpenAI service: the access token might be missing, invalid, scoped to the wrong audience, or expired. This points to a problem with how the authentication parameters are being used.
From the code snippet you've shared, it looks like you're initializing the AzureOpenAI class correctly with the necessary parameters. However, given the error, there may be a discrepancy in how these parameters are being applied during authentication.
Here are a few potential solutions and insights based on similar issues:

- Environment Variables Before Module Import: Ensure you set the environment variable for the API key before importing the AzureOpenAI module. This approach has resolved similar issues for other users.

- Using ServiceContext: In some cases, using the ServiceContext for initializing the AzureOpenAI class has been suggested. Although this was specifically mentioned for a different version, it might be worth exploring:

      from llama_index import ServiceContext, set_global_service_context

      service_context = ServiceContext.from_defaults(
          llm=AzureOpenAI(engine="gpt-4-32k", model="gpt-4-32k"),
      )
      set_global_service_context(service_context)

- Custom Embeddings Setup: If customization is part of your implementation, ensure both the LLM and the embedding model are correctly set in the service context. This is crucial for avoiding reliance on the default OpenAI model.

- API Key Validation Issue: There was a known bug related to API key validation that has been addressed. Although your version might not be directly affected, checking the specifics of this fix could provide some clues.
If these suggestions do not resolve the issue, more details about how you're setting the environment variables and any additional context about your application's deployment might help in further diagnosing the problem.
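One more angle worth checking: the 401 message's mention of the https://cognitiveservices.azure.com audience can be a hint that the configured endpoint is a generic Cognitive Services endpoint rather than the Azure OpenAI one. A small, purely illustrative helper to flag that (the expected host suffixes are assumptions based on typical Azure endpoint naming, not something confirmed by this issue):

```python
from urllib.parse import urlparse


def diagnose_endpoint(endpoint: str) -> str:
    """Return a short verdict on whether an endpoint looks like Azure OpenAI."""
    host = urlparse(endpoint).hostname or ""
    if host.endswith(".openai.azure.com"):
        return "ok: looks like an Azure OpenAI endpoint"
    if host.endswith(".cognitiveservices.azure.com"):
        return "suspicious: generic Cognitive Services endpoint; 401s like the one above can follow"
    return "unknown: double-check the endpoint in the Azure portal"


print(diagnose_endpoint("https://myresource.openai.azure.com/"))
```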
Sources
- [Question]: AuthenticationError when using OpenAI Embedding API in llama_index
- [Bug]: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: xxxxx********************xxxx
- [Question]: When customize the LLM, using VectorStoreIndex, it reports error
- [Bug]: "Invalid OpenAI API key" error with valid Azure OpenAI key
- llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/utils.py
I'm not an Azure expert, but it seems like some config is missing. This works fine for me on my Azure setup:
api_key = "<api-key>"
azure_endpoint = "https://<your-resource-name>.openai.azure.com/"
api_version = "2023-07-01-preview"

llm = AzureOpenAI(
    model="gpt-35-turbo-16k",
    deployment_name="my-custom-llm",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)
@logan-markewich thanks for the response. Which config are you referring to precisely? I can't seem to find anything wrong.
I meant that some setup for your LLM seems incorrect, otherwise you wouldn't get that error.
The above code I posted works for my Azure deployment 🤷🏻‍♂️ Maybe even double-check that the values are being set as expected (I see you are using os envs, maybe try hardcoding them to make sure?)
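Short of hardcoding, a quick sketch for checking that the values are actually set (the variable names are taken from the snippet in the issue; this is illustrative only and should be adapted to your own .env file):

```python
# List which of the variables the issue's snippet reads are missing or empty.
import os

REQUIRED_VARS = [
    "AZURE_OPENAI_LLM_ENGINE",
    "AZURE_OPENAI_LLM_MODEL",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_VERSION",
]


def missing_vars(env, required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]


if __name__ == "__main__":
    # python-dotenv is optional here; load it if available so .env values
    # are visible to os.environ, matching the issue's setup.
    try:
        import dotenv
        dotenv.load_dotenv()
    except ImportError:
        pass
    for name in missing_vars(os.environ):
        print(f"MISSING: {name}")
```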
Hi @MaximeLanglet, did you find any solution regarding this issue?
Bumping this again, was a solution found for this?