
[Bug]: xAI API Key not working

Open a-rbsn opened this issue 10 months ago • 12 comments

What happened?

I've generated an API key from xAI, and as I attempt to add the grok-2-latest model, I get this error when I test the API key — it looks like it's trying to use OpenAI to test xAI? Even if I add the model and try to use it I get the same error.

I am using the following version of LiteLLM as my admin account wasn't working properly when I set it up:

git clone --branch v1.63.6-nightly --single-branch https://github.com/BerriAI/litellm.git

Relevant log output

litellm.AuthenticationError: AuthenticationError: XaiException - Incorrect API key provided: xai-bs3B************************************************************************41Ub. You can find your API key at https://platform.openai.com/account/api-keys.
stack trace: Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/litellm/llms/openai/openai.py", line 794, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<4 lines>...
    )
    ^
  File "/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/logging_utils.py", line 131, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/litellm/llms/openai/openai.py", line 438, in make_openai_chat_completion_request
    raise e
  File "/usr/lib/python3.13/site-packages/litellm/llms/openai/openai.py", line 420, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
        **data, timeout=timeout
    )
  File "/usr/lib/python3.13/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R],

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.63.6-nightly

Twitter / LinkedIn details

No response

a-rbsn avatar Mar 16 '25 14:03 a-rbsn

@a-rbsn the below code is working for me - can you confirm if you have a different setup?

from litellm import completion

response = completion(
    model="xai/grok-2-latest",
    messages=[
        {"role": "user", "content": "Hello, world!"}
    ],
)

With the environment variable named XAI_API_KEY.
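
For reference, you can also set it from Python before the call; a minimal sketch (the key value below is a placeholder):

import os

# LiteLLM reads the xAI key from this environment variable for xai/* models
os.environ["XAI_API_KEY"] = "xai-your-key-here"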

colesmcintosh avatar Mar 16 '25 15:03 colesmcintosh

Hi @colesmcintosh — I'm using the dashboard to add and test models so that I can use Open-WebUI. If I go to the 'Test Key' LLM playground, select grok-2-latest, and use my xAI API key, I get this error:

Error occurred while generating model response. Please try again. Error: Error: 401 Authentication Error, LiteLLM Virtual Key expected. Received=xai-4fq*************************************BMT, expected to start with 'sk-'.

a-rbsn avatar Mar 16 '25 15:03 a-rbsn

Hi @colesmcintosh — I'm using the dashboard to add and test models so that I can use Open-WebUI. If I go to the 'Test Key' LLM playground, select grok-2-latest, and use my xAI API key, I get this error:

Error occurred while generating model response. Please try again. Error: Error: 401 Authentication Error, LiteLLM Virtual Key expected. Received=xai-4fq*************************************BMT, expected to start with 'sk-'.

I can confirm the same is happening here. I can only add the models with a wildcard, but they still fail with the above error once used.

theonewizard avatar Mar 16 '25 15:03 theonewizard

@a-rbsn Your UI error is not a bug. You need to pass the LiteLLM API key to call the proxy.
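
For example, a call to the proxy looks roughly like the sketch below - the key you authenticate with is a LiteLLM virtual key (sk-...), not the raw xai-... key, which stays configured on the model server-side (the key and base_url values here are placeholders):

import openai

# Talk to the LiteLLM proxy, not xAI directly; authenticate with a LiteLLM virtual key
client = openai.OpenAI(
    api_key="sk-your-litellm-virtual-key",  # LiteLLM virtual key, not the xai-... key
    base_url="http://localhost:4000",       # your proxy URL
)

response = client.chat.completions.create(
    model="grok-2-latest",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response.choices[0].message.content)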

krrishdholakia avatar Mar 16 '25 19:03 krrishdholakia

There seems to be a bug here. I simply tried adding grok-2-latest to LiteLLM in the UI and clicked the Test Connection button - it always fails. It is trying to call an OpenAI URL (see the API request below), not the xAI URL. If I switch to the model wildcard to bring in all models, then Test Connection works.


curl -X POST https://api.openai.com/v1/ \
  -H 'Content-Type: application/json' \
  -d '{"model": "grok-2-latest", "messages": [{"role": "user", "content": "What'\''s 1 + 1?"}], "extra_body": {}}'

scottrhay avatar Mar 16 '25 21:03 scottrhay

@krrishdholakia — I know how to use LiteLLM; I have other models working fine on Open-WebUI. The issue is with LiteLLM and the xAI API. I'll record my screen if you like.

Edit: As Scott has pointed out above, it happens when you select specific models within xAI. I'm not a fan of bringing all models into a cluttered list, so when selecting just grok-2-latest, it fails.

Edit: So I tried the wildcard, which added all of xAI's models into LiteLLM and my Open-WebUI, but when trying to use grok-2 I get:

401: litellm.AuthenticationError: AuthenticationError: XaiException - Incorrect API key provided: xai-4fqy************************************************************************zBMT. You can find your API key at https://platform.openai.com/account/api-keys.. Received Model Group=xai/grok-2
Available Model Group Fallbacks=None

a-rbsn avatar Mar 16 '25 21:03 a-rbsn

Also, even when the Test Connection works when selecting all xAI models - when I go over to try and use this with Open WebUI - none of the xAI models actually work. I get the error below.

Other models I've added in, like Claude 3.7, are working just fine - the issue seems to be with xAI.


401: litellm.AuthenticationError: AuthenticationError: XaiException - Incorrect API key provided: xai-w9eE************************************************************************OBno. You can find your API key at https://platform.openai.com/account/api-keys.. Received Model Group=xai/grok-beta Available Model Group Fallbacks=None

scottrhay avatar Mar 16 '25 21:03 scottrhay

Hey @a-rbsn if you can share a recording that would be great (including how the model is being added).

krrishdholakia avatar Mar 17 '25 01:03 krrishdholakia

Perhaps this is a UI bug

krrishdholakia avatar Mar 17 '25 01:03 krrishdholakia

Not wanting to add to the noise here - just adding a +1 to this. I have the same issue.

u5rg2t avatar Mar 17 '25 09:03 u5rg2t

Hi @krrishdholakia — I've uploaded a screen recording here, hopefully this clarifies things:

https://www.youtube.com/watch?v=CupoTHX4iPI

a-rbsn avatar Mar 17 '25 09:03 a-rbsn

The API base URL is set to https://api.openai.com/v1/. This should be https://api.x.ai/v1. When adding e.g. an OpenAI model, you can edit the base URL via the UI, but for xAI you can't.
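
As a quick sanity check that the key itself is fine, you can call that base URL directly (xAI's endpoint is OpenAI-compatible); a minimal sketch, assuming XAI_API_KEY is exported:

import os
import openai

# Call xAI directly, bypassing LiteLLM, to confirm the key works against api.x.ai
client = openai.OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",
)

print(client.chat.completions.create(
    model="grok-2-latest",
    messages=[{"role": "user", "content": "What is 1 + 1?"}],
).choices[0].message.content)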

captain-nemo avatar Mar 17 '25 12:03 captain-nemo

This is a UI bug when you add a credential via the UI.

The issue also happens with Perplexity and Fireworks.

https://github.com/BerriAI/litellm/issues/9304

jamie-dit avatar Mar 18 '25 02:03 jamie-dit

When you try to edit the credential, it comes up with the OpenAI base URL.

Note: This is the edit credential UI for a Perplexity credential I made. Note that the provider is blank, and the API base URL shows perplexity. If you change provider to perplexity, it doesn't seem to fix the issue.

Image

jamie-dit avatar Mar 18 '25 02:03 jamie-dit

Ok, I was able to fix xAI by manually adding the API base to the model. It should do this automatically if a provider is set.

Image

jamie-dit avatar Mar 18 '25 03:03 jamie-dit

I'll pick this up tomorrow.

Also added this to our UI QA checklist to avoid future regressions - https://github.com/BerriAI/litellm/discussions/8495#discussioncomment-12180711

krrishdholakia avatar Mar 18 '25 04:03 krrishdholakia

Able to repro. Caused by custom_llm_provider being set as a separate field.
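
For context, in the Python SDK the provider can be passed as its own argument and LiteLLM resolves the xAI base URL from it - a rough sketch of the equivalent call:

from litellm import completion

# Passing the provider explicitly is equivalent to prefixing the model as "xai/grok-2-latest"
response = completion(
    model="grok-2-latest",
    custom_llm_provider="xai",
    messages=[{"role": "user", "content": "Hello, world!"}],
)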

krrishdholakia avatar Mar 18 '25 17:03 krrishdholakia

Fixed as of v1.63.14!

krrishdholakia avatar Mar 23 '25 00:03 krrishdholakia

Are you sure this is fixed? I still have the issue with LiteLLM and xAI models. I don't use the GUI; the workaround is to manually set the base to https://api.x.ai/v1.
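
For a config-file setup, that workaround looks roughly like this (the model name and key reference below are just examples):

model_list:
  - model_name: grok-2-latest
    litellm_params:
      model: xai/grok-2-latest
      api_key: os.environ/XAI_API_KEY
      api_base: https://api.x.ai/v1  # workaround: pin the xAI base explicitly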

spammenotinoz avatar Apr 02 '25 00:04 spammenotinoz

Hey @spammenotinoz, I can confirm this is working on the latest stable (v1.65.0). Just tried adding the xAI model through the UI in our staging environment and calling via the API - it worked as expected.

Image Image

krrishdholakia avatar Apr 02 '25 00:04 krrishdholakia

Thanks for the quick response - must be a user error on my end. With the same setup as your post I get "litellm.BadRequestError: XaiException - Error code: 400 - {'code': 'Client specified an invalid argument', 'error': 'An empty user message was provided. Every user message needs at least one non-empty content element.'}"

I'll do some testing; it must be an Open-WebUI issue, but it's strange that all other models work via LiteLLM.

Update: Sorry, yes, definitely an Open WebUI issue. I deployed a new instance, and posting directly with the same config file works.

spammenotinoz avatar Apr 02 '25 00:04 spammenotinoz