[BUG] Authentication Error When Using OpenAI Compatible LLMs - Generic error message
Description
When configuring CrewAI to use an OpenAI-compatible LLM provider (not OpenAI itself), the framework incorrectly attempts to validate API keys against OpenAI's authentication servers regardless of the specified base_url. This results in authentication failures with error code 401 even when valid credentials for the alternative provider are supplied.
Steps to Reproduce
- Install the required dependencies (see the one-line install command after this list):
  langchain==0.3.17 langchain-community==0.3.16 langchain-core==0.3.33 crewai==0.100.0 crewai-tools==0.33.0
- Create a Crew using an OpenAI-compatible LLM instance as the agent's LLM (sabia-3, for example)
- Don't set any OpenAI credential (API key), since we are not using their model
- Set the Crew's planning parameter to True, leaving planning_llm as None
Expected behavior
CrewAI should respect the base_url parameter and send authentication requests to the specified provider's endpoint rather than OpenAI's servers.
Actual Behavior
CrewAI (via LiteLLM) attempts to validate the API key against OpenAI's servers regardless of the specified base_url, causing authentication failures.
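For context, a direct LiteLLM call does honor a custom endpoint when api_base is passed explicitly. A minimal sketch against the same Maritaca endpoint (assuming a valid key in place of the placeholder):

import litellm

# Direct LiteLLM call to an OpenAI-compatible provider: the "openai/"
# prefix selects the OpenAI-compatible client, and api_base routes the
# request to the alternative endpoint instead of api.openai.com.
response = litellm.completion(
    model="openai/sabia-3",
    api_base="https://chat.maritaca.ai/api",
    api_key="SABIA_API_KEY",  # placeholder, as in the snippet below
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)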
Screenshots/Code snippets
agent = Agent(
    role="ROLE",
    goal="GOAL",
    backstory="BACKSTORY",
    llm=LLM(
        model="openai/sabia-3",
        temperature=0.7,
        base_url='https://chat.maritaca.ai/api',
        api_key="SABIA_API_KEY"
    )
)

Crew(
    tasks=[
        Task(
            description="TASK DESCRIPTION",
            expected_output="EXPECTED OUTPUT",
            agent=agent
        )
    ],
    agents=[agent],
    process=Process.sequential,
    planning=True,
    cache=True,
    memory=False,
    verbose=True
).kickoff()
Operating System
Windows 11
Python Version
3.12
crewAI Version
0.100.0
crewAI Tools Version
0.33.0
Virtual Environment
Venv
Evidence
raise AuthenticationError( litellm.exceptions.AuthenticationError: litellm.AuthenticationError: AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: asd43bvc**************************xadv. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Possible Solution
Indicating a "planning_llm" inside the "Crew" solves the error. The big problem is that I spent over 3 days trying to figure out why my Crew was trying to communicate with the OpenAI API when I had explicitly told the Crew to use another LLM that was compatible with it. The error message when not using an OpenAI model needs to change, to avoid letting users lose their minds.
One solution could be to add a clearer message in the documentation about the dependency between the "planning" parameter and the "planning_llm" parameter, since many users don't use OpenAI to run a crew.
Another solution is to change the error message so it is clearer about what the error is really about.
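For anyone else hitting this, a sketch of the workaround (reusing the same LLM instance as planning_llm, so planning never falls back to an OpenAI default):

from crewai import LLM, Agent, Crew, Process, Task

llm = LLM(
    model="openai/sabia-3",
    temperature=0.7,
    base_url='https://chat.maritaca.ai/api',
    api_key="SABIA_API_KEY"
)

agent = Agent(role="ROLE", goal="GOAL", backstory="BACKSTORY", llm=llm)

Crew(
    tasks=[Task(description="TASK DESCRIPTION", expected_output="EXPECTED OUTPUT", agent=agent)],
    agents=[agent],
    process=Process.sequential,
    planning=True,
    planning_llm=llm,  # explicitly set, so planning stops defaulting to an OpenAI model
).kickoff()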
Additional context
...
Can you help me with one piece of info?
When you say that providing a planning_llm resolves the issue, what did you set the planning LLM to?
LLM(
model="openai/sabia-3",
temperature=0.7,
base_url='https://chat.maritaca.ai/api',
api_key="SABIA_API_KEY"
)
Also, can you confirm whether the LLM you have defined is compatible with LiteLLM? You can try the example given here:
from crewai import LLM
from pydantic import BaseModel

class Dog(BaseModel):
    name: str
    age: int
    breed: str

llm = LLM(model="gpt-4o", response_format=Dog)

response = llm.call(
    "Analyze the following messages and return the name, age, and breed. "
    "Meet Kona! She is 3 years old and is a black german shepherd."
)
print(response)
# Output:
# Dog(name='Kona', age=3, breed='black german shepherd')
Double checking: what happens when you run this?
llm = LLM(
model="openai/sabia-3",
temperature=0.7,
base_url='https://chat.maritaca.ai/api',
api_key="SABIA_API_KEY"
)
llm.call(messages=[{"role": "user", "content": "Hello, how are you?"}])
Does it work?
Yes, the real problem is the error message, which misguided me into thinking that something was wrong with the LLM instance I was using. The real problem was that I was telling my crew to plan before executing, but I didn't provide a planning agent. Once I provided a planning agent with any LLM that LiteLLM supports, it worked.
This PR should fix this problem, but it seems it was discarded:
https://github.com/crewAIInc/crewAI/pull/2649
Now I understand this.
Basically, when you do not pass a particular planning agent, crewAI itself handles the planning by default, and the default LLM used for that is
self.planning_agent_llm = "gpt-4o-mini"
This explains why the auth is happening and why it is asking for an OpenAI key.
Just one more clarification I need: you can only pass a planning LLM, right, since the planning agent itself is taken care of by crewAI?
@lucasgomide, do you think another feature could be added, rather than defaulting directly to gpt-4o-mini?
We could add a check at the crew level for whether an LLM instance is given, and warn the user that because no planning LLM was given, the crew is defaulting to the LLM set at the crew level.
Also, do you think checking at the agent level for an LLM instance would be a good idea, as I think Devin does? Different agents could have different supported LLMs, so which one should be preferred?
Another solution could be that rather than defaulting to gpt-4o-mini, we make a validation check: if planning=True, then a planning LLM needs to be set as well! (A rough sketch of such a check follows below.)
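A rough sketch of what that validation could look like (hypothetical helper, not actual crewAI internals):

# Hypothetical check, not crewAI source: fail fast when planning is
# enabled without a planning LLM, instead of silently defaulting to
# "gpt-4o-mini" (which requires an OpenAI API key).
def validate_planning_config(planning: bool, planning_llm=None) -> None:
    if planning and planning_llm is None:
        raise ValueError(
            "planning=True but no planning_llm was provided; set "
            "planning_llm explicitly instead of relying on the "
            "'gpt-4o-mini' default, which requires an OpenAI API key."
        )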
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.