[Bug]: xAI API Key not working
What happened?
I've generated an API key from xAI, and when I try to add the grok-2-latest model, I get the error below when I test the API key. It looks like LiteLLM is trying to use OpenAI to test xAI? Even if I add the model and try to use it, I get the same error.
I am using the following version of LiteLLM as my admin account wasn't working properly when I set it up:
git clone --branch v1.63.6-nightly --single-branch https://github.com/BerriAI/litellm.git
Relevant log output
litellm.AuthenticationError: AuthenticationError: XaiException - Incorrect API key provided: xai-bs3B************************************************************************41Ub. You can find your API key at https://platform.openai.com/account/api-keys.
stack trace: Traceback (most recent call last):
File "/usr/lib/python3.13/site-packages/litellm/llms/openai/openai.py", line 794, in acompletion
headers, response = await self.make_openai_chat_completion_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
)
^
File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/logging_utils.py", line 131, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/litellm/llms/openai/openai.py", line 438, in make_openai_chat_completion_request
raise e
File "/usr/lib/python3.13/site-packages/litellm/llms/openai/openai.py", line 420, in make_openai_chat_completion_request
await openai_aclient.chat.completions.with_raw_response.create(
**data, timeout=timeout
)
File "/usr/lib/python3.13/site-packages/openai/_legacy_response.py", line 381, in wrapped
return cast(LegacyAPIResponse[R],
Are you a ML Ops Team?
No
What LiteLLM version are you on?
v1.63.6-nightly
Twitter / LinkedIn details
No response
@a-rbsn the below code is working for me - can you confirm if you have a different setup?
from litellm import completion

response = completion(
    model="xai/grok-2-latest",
    messages=[
        {"role": "user", "content": "Hello, world!"}
    ],
)
With the environment variable named XAI_API_KEY.
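For clarity, a minimal sketch of the setup I'm assuming (the key value is a placeholder; in practice, export XAI_API_KEY in your shell or proxy environment rather than hard-coding it):

import os

from litellm import completion

# Placeholder key for illustration only; normally XAI_API_KEY is exported
# in the environment before the process starts.
os.environ["XAI_API_KEY"] = "xai-..."

response = completion(
    model="xai/grok-2-latest",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)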
Hi @colesmcintosh, I'm using the dashboard to add and test models so that I can use Open-WebUI. If I go to the 'Test Key' LLM playground, select grok-2-latest, and use my xAI API key, I get this error:
Error occurred while generating model response. Please try again. Error: Error: 401 Authentication Error, LiteLLM Virtual Key expected. Received=xai-4fq*************************************BMT, expected to start with 'sk-'.
I can confirm the same is happening here. I can only add them with the wildcard, but they still fail with the above error once used.
@a-rbsn Your UI error is not a bug. You need to pass the LiteLLM API key to call the proxy.
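In other words, the playground and any OpenAI-compatible client should send a LiteLLM virtual key (sk-...) to the proxy, while the raw xai-... key stays in the proxy's model config. A minimal sketch, assuming the proxy runs at http://localhost:4000 and sk-1234 is a virtual key you generated:

from openai import OpenAI

# The proxy expects a LiteLLM virtual key ("sk-..."), not the raw xai-... key;
# the xAI key is configured on the proxy side for the model.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="grok-2-latest",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)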
There seems to be a bug here. I simply tried adding grok-2-latest to LiteLLM in the UI and clicked the Test Connection button; it always fails. It is trying to call an OpenAI URL (see the API request code below), not the xAI URL. If I switch to the model wildcard to bring in all models, then Test Connection works.
curl -X POST https://api.openai.com/v1/ \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "grok-2-latest",
    "messages": [
      {
        "role": "user",
        "content": "What'\''s 1 + 1?"
      }
    ],
    "extra_body": {}
  }'
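For comparison, a quick way to confirm the key itself is valid (outside LiteLLM) is to hit xAI's endpoint directly. A sketch, assuming XAI_API_KEY is set and xAI's OpenAI-compatible base is https://api.x.ai/v1:

import os
import requests

# Direct call to xAI's OpenAI-compatible endpoint, bypassing LiteLLM,
# to verify the key itself before blaming the proxy/UI config.
resp = requests.post(
    "https://api.x.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['XAI_API_KEY']}"},
    json={
        "model": "grok-2-latest",
        "messages": [{"role": "user", "content": "What's 1 + 1?"}],
    },
    timeout=30,
)
print(resp.status_code, resp.text)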
@krrishdholakia, I know how to use LiteLLM; I have other models working fine on Open-WebUI. The issue is with LiteLLM and the xAI API. I'll record my screen if you like.
Edit: As Scott has pointed out above, it happens when you select specific models within xAI. I'm not a fan of bringing all models into a cluttered list, so when selecting just grok-2-latest, it fails.
Edit: So I tried the wildcard, which added all of the xAI models into LiteLLM and my Open-WebUI, but when trying to use grok-2 I get:
401: litellm.AuthenticationError: AuthenticationError: XaiException - Incorrect API key provided: xai-4fqy************************************************************************zBMT. You can find your API key at https://platform.openai.com/account/api-keys.. Received Model Group=xai/grok-2
Available Model Group Fallbacks=None
Also, even when Test Connection works after selecting all xAI models, when I go over and try to use this with Open-WebUI none of the xAI models actually work; I get the error below.
Other models I've added, like Claude 3.7, are working just fine; the issue seems to be with xAI.
401: litellm.AuthenticationError: AuthenticationError: XaiException - Incorrect API key provided: xai-w9eE************************************************************************OBno. You can find your API key at https://platform.openai.com/account/api-keys.. Received Model Group=xai/grok-beta Available Model Group Fallbacks=None
Hey @a-rbsn, if you can share a recording that would be great (including how the model is being added).
Perhaps this is a UI bug.
Not wanting to add to the noise here - just adding a +1 to this. I have the same issue.
Hi @krrishdholakia — I've uploaded a screen recording here, hopefully this clarifies things:
https://www.youtube.com/watch?v=CupoTHX4iPI
The API base URL is set to https://api.openai.com/v1/. It should be https://api.x.ai/v1. When adding e.g. an OpenAI model, you can edit the base URL via the UI, but for xAI you can't.
This is a UI bug when you add a credential via the UI.
The issue also happens with Perplexity and Fireworks.
https://github.com/BerriAI/litellm/issues/9304
When you try to edit the credential, it comes up with the OpenAI base URL.
Note: This is the edit credential UI for a Perplexity credential I made. Note that the provider is blank, and the API base URL shows perplexity. If you change provider to perplexity, it doesn't seem to fix the issue.
Ok, I was able to fix xAI by manually adding the API base to the model. It should do this automatically if a provider is set.
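In code terms, the manual fix amounts to pinning api_base in the model's litellm_params. A minimal sketch using the Python Router (model names and key handling are placeholders, not the exact UI config):

import os

from litellm import Router

# Workaround sketch: set api_base explicitly on the model's litellm_params,
# which is effectively what adding the API base in the UI does.
router = Router(
    model_list=[
        {
            "model_name": "grok-2-latest",
            "litellm_params": {
                "model": "xai/grok-2-latest",
                "api_key": os.getenv("XAI_API_KEY"),
                "api_base": "https://api.x.ai/v1",  # manual override
            },
        }
    ]
)

response = router.completion(
    model="grok-2-latest",
    messages=[{"role": "user", "content": "Hello, world!"}],
)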
I'll pick this up tomorrow.
Also added this to our UI QA checklist to avoid future regressions - https://github.com/BerriAI/litellm/discussions/8495#discussioncomment-12180711
Able to repro. Caused by custom_llm_provider being set as a separate field
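For context on the repro: when the provider is encoded in the model name prefix, litellm resolves the xai provider and its default base URL from it; the broken path was the provider arriving as a separate custom_llm_provider field. An illustrative sketch (exact return values may vary by version):

import litellm

# With the provider in the model name, litellm resolves the provider and
# its default base URL from the "xai/" prefix.
model, provider, api_key, api_base = litellm.get_llm_provider(model="xai/grok-2-latest")
print(provider)   # expected: "xai"
print(api_base)   # expected: xAI's default base, e.g. https://api.x.ai/v1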
Fixed as of v1.63.14!
Are you sure this is fixed? I still have the issue with LiteLLM and xAI models. I don't use the GUI; my workaround is to manually set the base to https://api.x.ai/v1
Hey @spammenotinoz, I can confirm this is working on the latest stable (v1.65.0). I just tried adding the xAI model through the UI in our staging environment and calling it via the API, and it worked as expected.
Thanks for the quick response; it must be a user error on my end. Following your same post, I get "litellm.BadRequestError: XaiException - Error code: 400 - {'code': 'Client specified an invalid argument', 'error': 'An empty user message was provided. Every user message needs at least one non-empty content element.'}"
I'll do some testing; it must be an Open-WebUI issue, but it's strange that all other models work via LiteLLM.
Update: Sorry, yes, definitely an Open-WebUI issue. I deployed a new instance, and posting directly with the same config file works.