crewAI
[BUG] o4-mini usage raising stop parameter not supported exception
Description
When attempting to use o4-mini as the Model for an agent, the OpenAI API returns an error indicating that the stop parameter is not supported.
Stacktrace:
File "/usr/local/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 208, in _invoke_loop
raise e
File "/usr/local/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 155, in _invoke_loop
answer = get_llm_response(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/crewai/utilities/agent_utils.py", line 157, in get_llm_response
raise e
File "/usr/local/lib/python3.12/site-packages/crewai/utilities/agent_utils.py", line 148, in get_llm_response
answer = llm.call(
^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/crewai/llm.py", line 794, in call
return self._handle_non_streaming_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agents/utilities.py", line 90, in _handle_non_streaming_response
return super()._handle_non_streaming_response(params, callbacks, available_functions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/crewai/llm.py", line 630, in _handle_non_streaming_response
response = litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litellm/utils.py", line 1154, in wrapper
raise e
File "/usr/local/lib/python3.12/site-packages/litellm/utils.py", line 1032, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litellm/main.py", line 3068, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2201, in exception_type
raise e
File "/usr/local/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 326, in exception_type
raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "Unsupported parameter: 'stop' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'stop', 'code': 'unsupported_parameter'}}
Code that adds the erroneous parameter: https://github.com/crewAIInc/crewAI/blob/6d0039b117b7970f01d79ed58d20efa20573fb22/src/crewai/llm.py#L341
Solution:
By removing the stop parameter in a custom LLM subclass, the issue goes away. Example:
class LLMWithFixedStopWords(LLM):
    def _handle_streaming_response(self, params, callbacks, available_functions):
        if "o4-mini" in self.model:
            params.pop("stop", None)
        return super()._handle_streaming_response(params, callbacks, available_functions)

    def _handle_non_streaming_response(self, params, callbacks, available_functions):
        if "o4-mini" in self.model:
            params.pop("stop", None)
        return super()._handle_non_streaming_response(params, callbacks, available_functions)
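For reference, a minimal usage sketch, assuming the subclass above is in scope (the agent fields here are illustrative, not from the report):
from crewai import Agent

# Hand the subclass to the agent in place of the stock crewai.LLM.
o4_mini_llm = LLMWithFixedStopWords(model="o4-mini")

agent = Agent(
    role="Researcher",                  # illustrative values
    goal="Answer a short question.",
    backstory="Minimal agent used to exercise the workaround.",
    llm=o4_mini_llm,
)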
Steps to Reproduce
- Create any agent that uses o4-mini as the model
- Run the agent with any task (a minimal sketch follows below)
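A minimal sketch of those steps (the agent and task wording here are illustrative, not from the report):
from crewai import Agent, Task, Crew, LLM

o4_mini_llm = LLM(model="o4-mini")

agent = Agent(
    role="Researcher",                  # illustrative values
    goal="Answer a short question.",
    backstory="Minimal agent used only to trigger the LLM call.",
    llm=o4_mini_llm,
)

task = Task(
    description="Say hello.",
    expected_output="A short greeting.",
    agent=agent,
)

# Kicking off the crew makes the agent call the LLM; on affected versions this
# raises the litellm.BadRequestError shown above because a stop parameter is sent.
Crew(agents=[agent], tasks=[task]).kickoff()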
Expected behavior
Agent runs successfully
Screenshots/Code snippets
See in description
Operating System
Ubuntu 20.04
Python Version
3.11
crewAI Version
0.114
crewAI Tools Version
N/A
Virtual Environment
Venv
Evidence
See description
Possible Solution
See description
Additional context
See description
hey @tspecht
I got the same issue last week with other models as well. I hope to push a solution by the end of the week.
Hey Everyone,
This is what I did and it allows the calls to the models to go through:
# Patch for BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'stop' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'stop', 'code': 'unsupported_parameter'}}
# ── 1. Imports ──────────────────────────────────────────────────────────────
from google.colab import userdata
import os
import litellm
# Whatever wrapper you’re using:
from crewai import LLM # ⬅️ or `from langchain_openai import ChatOpenAI as LLM`
# ── 2. Monkey-patch LiteLLM to strip the unsupported `stop` field ───────────
original_completion = litellm.completion
def patched_completion(*args, **kwargs):
    if "stop" in kwargs:
        print("Removing 'stop' parameter from LiteLLM call …")
        kwargs.pop("stop")
    return original_completion(*args, **kwargs)
litellm.completion = patched_completion # <-- Patch must happen *before* any LLM is instantiated
# ── 3. API key ──────────────────────────────────────────────────────────────
os.environ["OPENAI_API_KEY"] = userdata.get("openai_api_key")
# ── 4. LLM instances ────────────────────────────────────────────────────────
llm_model = "gpt-4o" # preview model
my_llm = LLM(
    model=llm_model,
    temperature="high",
    api_key=os.environ["OPENAI_API_KEY"]
)
my_llm2 = LLM(
    model="o3",
    temperature="high",
    api_key=os.environ["OPENAI_API_KEY"]
)
my_llm3 = LLM(
    model="o1",
    temperature="high",
    api_key=os.environ["OPENAI_API_KEY"]
)
Just submitted this PR to address that: https://github.com/crewAIInc/crewAI/pull/2742/files
Now you can simply set stop=None directly in your LLM to remove this flag from LiteLLM calls.
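For example, a sketch assuming the behavior described in that PR (an explicit stop=None keeps the stop flag out of the LiteLLM call):
from crewai import LLM

# Sketch per the PR description: stop=None is meant to prevent crewAI from
# forwarding a stop list to litellm.completion().
o4_mini_llm = LLM(
    model="o4-mini",
    stop=None,
)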
@lucasgomide, hey, I know I'm pretty late with this suggestion, especially since the PR is already quite far along, and I've only tested this using o4 Mini on OpenRouter, but wouldn't it just be enough to drop this param using additional_drop_params, as stated in LiteLLM's docs, directly in the LLM definition? Something like:
from crewai import LLM, Agent
o4_mini_llm = LLM(
    # [...] other LLM configuration parameters
    additional_drop_params=["stop"]  # 👈
)
test_agent = Agent(
    # [...] other Agent configuration parameters
    llm=o4_mini_llm
)
Hey @mouramax no worries dude (:
I've tested it here, and you're absolutely right. My PR isn't needed to address this issue; we can use the LiteLLM feature to handle it. Thanks a lot! (:
@lucasgomide with additional_drop_params=["stop"], it says "Tool is currently inaccessible."
I tested it yesterday. Would you mind sharing how you are calling it?
I have a custom tool, and it doesn't run when I set additional_drop_params=["stop"] inside LLM() with the o3 model. The tool runs just fine with o3-mini without additional_drop_params=["stop"].
Quick disclaimer: I'm not an OpenAI user myself, so I couldn't actually test this with the o3 (bigger) model since it's not available on OpenRouter.
The straightforward example below sets up a custom tool just for testing this out. If you guys get a chance, could you try running it with the o3 model?
from crewai import Agent, Task, Crew, LLM, Process
from crewai.tools import tool
import os
os.environ["OPENROUTER_API_KEY"] = "<KEY>"
@tool
def character_counter(text: str) -> str:
    """
    Counts the total number of characters (including spaces and punctuation)
    in the provided text string. Use this tool when you need to know the
    exact length of a string. Input must be a single string.
    """
    char_count = len(text)
    return f"Your text is {char_count} characters long."

o3_llm = LLM(
    model="openrouter/openai/o3",
    temperature=0.8,
    max_completion_tokens=300,
    additional_drop_params=["stop"]
)

poet_agent = Agent(
    role="Verse Virtuoso",
    goal="Craft a concise poem on the given topic.",
    backstory=(
        "A master wordsmith, you transform concepts into evocative poetry, "
        "always adhering to strict length constraints (400-600 characters) "
        "and focusing purely on the poetic form."
    ),
    tools=[character_counter],
    llm=o3_llm,
    allow_delegation=False,
    verbose=True
)

poem_writing_task = Task(
    description=(
        "Compose a poem about the given topic: '{topic}'. The poem's total "
        "character count (including all spaces and punctuation) MUST be "
        "strictly between 400 and 600 characters. Your final output must "
        "be ONLY the poem itself, with no other text."
    ),
    expected_output=(
        "A single text block containing the complete poem. The poem's "
        "total character count must strictly be between 400 and 600."
    ),
    agent=poet_agent
)

poetry_crew = Crew(
    agents=[poet_agent],
    tasks=[poem_writing_task],
    process=Process.sequential,
    verbose=True
)

result = poetry_crew.kickoff(
    inputs={
        "topic": "LLMs stop words"
    }
)
print(f"\n[🤖 Final Poem]\n{result.raw}\n")