[BUG] Incorrect `supports_response_schema` for OpenRouter models prevents structured output usage
Description
When using litellm with models accessed via the OpenRouter provider, the supports_response_schema function currently returns False.
This happens because OpenRouter is not explicitly listed among the providers that globally support structured outputs (PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA), and it appears there is no programmatic way via OpenRouter's API to check per-model whether a specific model supports the response_format parameter. As a result, the check defaults to False.
This causes issues for applications built on litellm, such as crewAI, which rely on this check to determine whether to include the response_format parameter in the API request. If supports_response_schema is False, the response_format is omitted, breaking functionality that expects structured output.
OpenRouter does support structured outputs for some models that are accessible through their API (e.g., OpenAI's GPT-4o, Fireworks models), as stated in their documentation: https://openrouter.ai/docs#structured-outputs (see the "Model Support" section).
Since litellm cannot reliably determine per-model support via the OpenRouter API, the current automatic check is insufficient and blocks valid use cases.
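A minimal way to reproduce the check result directly (assuming a recent litellm version; exact output may vary by version):
from litellm.utils import supports_response_schema

# The capability check that downstream callers (e.g. crewAI) rely on.
# For OpenRouter-prefixed models this currently prints False.
print(
    supports_response_schema(
        model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
        custom_llm_provider="openrouter",
    )
)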
Steps to Reproduce
1. Prerequisites:
   - Have Python installed.
   - Install crewai (specifically version 0.117.0), litellm, and pydantic using uv (or pip): uv tool install crewai==0.117.0
   - Obtain an OpenRouter API key and set it as an environment variable: export OPENROUTER_API_KEY='sk-or-...'
2. Create CrewAI Project: Use the CrewAI CLI to create a new project flow:
   crewai create flow projectname
   cd projectname
3. Modify src/projectname/main.py: Open the src/projectname/main.py file (or equivalent main entry point in your flow) and make the following changes:
   a. Initialize the OpenRouter LLM: Replace the default LLM initialization with your OpenRouter configuration. Ensure the OPENROUTER_API_KEY environment variable is checked.
   # Initialize the LLM
   OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
   mistralLLM: BaseLLM = BaseLLM(
       model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
       base_url="https://openrouter.ai/api/v1",
       api_key=OPENROUTER_API_KEY,
       temperature=0.0,
       seed=1984,
       stream=True,
       # Add additional params for OpenRouter routing preference
       additional_params={
           "provider": {
               "order": ["mistralai"],  # Note: Use provider name 'mistralai' or 'openai' etc. here, not model names
               "allow_fallbacks": False,
               "require_parameters": True,  # This requires the provider to support all params sent, including response_format if sent
           },
       },
   )
   llm = mistralLLM  # Assign your OpenRouter LLM to the 'llm' variable used by agents/tasks
4. Run the Flow: Execute the CrewAI flow using the kickoff command:
   crewai flow kickoff
Expected behavior
Allow the user to manually set supports_response_schema.
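For example, a hypothetical user-facing override could look like this (the supports_response_schema parameter shown here does not exist in crewai today; it is purely illustrative):
from crewai import LLM
from pydantic import BaseModel

class GuideOutline(BaseModel):  # stand-in for the flow's actual output model
    title: str

# Hypothetical: 'supports_response_schema' is NOT an existing LLM parameter.
llm = LLM(
    model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
    response_format=GuideOutline,
    supports_response_schema=True,  # user asserts the chosen model/provider supports structured output
)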
Screenshots/Code snippets
# Initialize the LLM
llm = mistralLLM
llm.response_format = GuideOutline
llm.additional_params = {
"provider": {
"order": ["Mistral"],
"allow_fallbacks": False,
"require_parameters": True,
},
}
Operating System
Windows 11
Python Version
3.12
crewAI Version
0.117.0
crewAI Tools Version
flow
Virtual Environment
Venv
Evidence
PS /path/to/project/> crewai flow kickoff
Running the Flow
╭────────────────────────────────────────────────────────────────── Flow Execution ──────────────────────────────────────────────────────────────────╮
│ │
│ Starting Flow Execution │
│ Name: GuideCreatorFlow │
│ ID: [FLOW_ID] │
│ │
│ │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
└── 🧠 Starting Flow...
Flow started with ID: [FLOW_ID]
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── 🧠 Starting Flow...
└── 🔄 Running: get_user_input
=== Create Your Comprehensive Guide ===
What topic would you like to create a guide for? gacha games
Who is your target audience? (beginner/intermediate/advanced) beginner
Creating a guide on gacha games for beginner audience...
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── Flow Method Step
└── ✅ Completed: get_user_input
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── Flow Method Step
├── ✅ Completed: get_user_input
└── 🔄 Running: create_guide_outline
Creating guide outline...
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── Flow Method Step
├── ✅ Completed: get_user_input
└── ❌ Failed: create_guide_outline
[Flow._execute_single_listener] Error in method create_guide_outline: The model openrouter/mistralai/mistral-small-3.1-24b-instruct does not support response_format for provider 'openrouter'. Please remove response_format or use a supported model.
Traceback (most recent call last):
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 1030, in _execute_single_listener
listener_result = await self._execute_method(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 876, in _execute_method
raise e
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 846, in _execute_method
else method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/src/alicecrewai/main.py", line 100, in create_guide_outline
response = llm.call(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 857, in call
self._validate_call_params()
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 999, in _validate_call_params
raise ValueError(
ValueError: The model openrouter/mistralai/mistral-small-3.1-24b-instruct does not support response_format for provider 'openrouter'. Please remove response_format or use a supported model.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/path/to/project/.venv/Scripts/kickoff.exe/__main__.py", line 10, in <module>
File "/path/to/project/src/alicecrewai/main.py", line 182, in kickoff
GuideCreatorFlow().kickoff()
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 722, in kickoff
return asyncio.run(run_flow())
^^^^^^^^^^^^^^^^^^^^^^^
File "/user/path/AppData/Roaming/uv/python/cpython-3.12.9-windows-x86_64-none/Lib/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/user/path/AppData/Roaming/uv/python/cpython-3.12.9-windows-x86_64-none/Lib/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/user/path/AppData/Roaming/uv/python/cpython-3.12.9-windows-x86_64-none/Lib/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 720, in run_flow
return await self.kickoff_async(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 787, in kickoff_async
await asyncio.gather(*tasks)
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 823, in _execute_start_method
await self._execute_listeners(start_method_name, result)
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 935, in _execute_listeners
await asyncio.gather(*tasks)
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 1030, in _execute_single_listener
listener_result = await self._execute_method(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 876, in _execute_method
raise e
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 846, in _execute_method
else method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/src/alicecrewai/main.py", line 100, in create_guide_outline
response = llm.call(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 857, in call
self._validate_call_params()
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 999, in _validate_call_params
raise ValueError(
ValueError: The model openrouter/mistralai/mistral-small-3.1-24b-instruct does not support response_format for provider 'openrouter'. Please remove response_format or use a supported model.
An error occurred while running the flow: Command '['uv', 'run', 'kickoff']' returned non-zero exit status 1.
Possible Solution
To address this, I propose adding a mechanism to manually override or force the supports_response_schema check specifically for the OpenRouter provider.
A simple approach could be introducing a configuration option or a flag that users can set when they know their chosen OpenRouter model does support structured output.
For example, within the supports_response_schema function (or a related configuration layer), a check could be added like:
# Inside litellm.supports_response_schema
def supports_response_schema(
model: str, custom_llm_provider: Optional[str] = None
) -> bool:
# ... (existing get_llm_provider logic) ...
# --- ADDITION START ---
# Check for manual override for OpenRouter
# This assumes a mechanism like `litellm.force_response_schema_support_for_openrouter = True` exists
# Or perhaps a provider-specific flag setting
if custom_llm_provider == litellm.LlmProviders.OPENROUTER:
# Replace this check with the actual configuration mechanism
if getattr(litellm, '_openrouter_force_structured_output', False):
verbose_logger.debug("Manually forcing response schema support for OpenRouter.")
return True
# If no manual override, proceed with existing checks or default behavior
# --- ADDITION END ---
# providers that globally support response schema
PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA = [
litellm.LlmProviders.PREDIBASE,
litellm.LlmProviders.FIREWORKS_AI,
]
if custom_llm_provider in PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA:
return True
# ... (rest of the existing _supports_factory logic) ...
return _supports_factory(
model=model,
custom_llm_provider=custom_llm_provider,
key="supports_response_schema",
)
This would require users to:
- Know that their specific OpenRouter model supports structured output.
- Set a corresponding flag (e.g., litellm._openrouter_force_structured_output = True) before making calls via OpenRouter where structured output is needed.
This manual override would bypass the currently failing automatic check and allow the response_format parameter to be passed to OpenRouter, enabling structured output functionality for compatible models.
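If something along these lines were adopted, caller-side usage might look like this (the flag is the hypothetical one from the sketch above; it does not exist in litellm today):
import litellm
from litellm.utils import supports_response_schema

# Hypothetical override from the proposal above; not an existing litellm setting.
litellm._openrouter_force_structured_output = True

# If the override were honored, this check would return True and callers like
# crewAI would keep response_format in the request.
print(
    supports_response_schema(
        model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
        custom_llm_provider="openrouter",
    )
)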
Additional context
None
Hey @Mateleo. When I first started using CrewAI, I ran into a similar issue and actually posted about it, which you can see here.
As a newbie trying to get my head around the framework's design, I initially thought the LLM class was basically a wrapper on steroids for the LiteLLM library. I figured sophisticated features like tools and structured output would be baked right into the LLM objects natively.
Turns out, those fancier features are handled by higher layers in the framework, like Agent, Task, and Crew. The code snippet below explores the different ways structured output works (and sometimes doesn't) in CrewAI. I'm using the exact same model you mentioned (mistral-small-3.1-24b-instruct) on the same provider (OpenRouter):
from litellm import completion, get_supported_openai_params
from crewai import LLM, Agent, Task, Crew, Process
from pydantic import BaseModel
from typing import List
from pprint import pprint
import os
# --- Configuration ---
os.environ['OPENROUTER_API_KEY'] = 'YOUR-KEY'
LLM_PROVIDER = "openrouter"
LLM_MODEL = "mistralai/mistral-small-3.1-24b-instruct"
LLM_MODEL_QUALIFIED = f"{LLM_PROVIDER}/{LLM_MODEL}"
class CalendarEvent(BaseModel):
name: str
date: str
participants: List[str]
EVENT_UNSTRUCTURED_TEXT= "Alice and Bob are going to a science fair on Friday."
# --- Step 1: Check 'response_format' Support ---
try:
supported_parameters = get_supported_openai_params(
model=LLM_MODEL,
custom_llm_provider=LLM_PROVIDER
)
if "response_format" in supported_parameters:
print("\n--> SUCCESS: 'response_format' is supported by this model.")
else:
print("\n--> INFO: 'response_format' is NOT supported by this model.")
except Exception as e:
print(f"\n--> ERROR: Could not check parameter support. Details:\n{e}")
# --- Step 2: Test litellm.completion with response_format ---
litellm_messages = [
{"role": "system", "content": "Extract the event information."},
{"role": "user", "content": EVENT_UNSTRUCTURED_TEXT},
]
try:
litellm_raw_response = completion(
model=LLM_MODEL_QUALIFIED,
messages=litellm_messages,
response_format=CalendarEvent,
)
print("\n--> Raw response from litellm.completion:")
pprint(litellm_raw_response)
except Exception as e:
error_message = f"ERROR during litellm.completion call:\n{e}"
print(f"\n--> {error_message}")
# --- Step 3: Test crewai.LLM with response_format ---
crewai_llm = LLM(
model=LLM_MODEL_QUALIFIED,
temperature=0.7,
response_format=CalendarEvent,
)
try:
crewai_raw_response = crewai_llm.call(
"Extract the event information:\n\n"
f"{EVENT_UNSTRUCTURED_TEXT}"
)
print("\n--> Raw response from crewai_llm.call:")
pprint(crewai_raw_response)
except Exception as e:
error_message = f"ERROR during crewai.LLM call:\n{e}"
print(f"\n--> {error_message}")
crewai_structured_response = error_message
# --- Step 4: Test a full crewai.Crew with output_pydantic ---
crewai_llm = LLM(
model=LLM_MODEL_QUALIFIED,
temperature=0.7,
)
information_extractor_agent = Agent(
role="Information Extractor",
goal=(
"Accurately extract event details (name, date, participants) "
"from text."
),
backstory=(
"You are an AI assistant specialized in parsing unstructured text "
"and extracting key pieces of information into a predefined "
"structured format. You pay close attention to identifying event "
"names, specific dates or days, and the individuals involved."
),
llm=crewai_llm,
verbose=False,
allow_delegation=False,
)
extract_event_task = Task(
description=(
"Analyze the following text and extract the calendar event "
"details. Focus precisely on the event's name, the date/day "
"it occurs, and the list of participants mentioned.\n"
"Input Text: '{event_text}'"
),
expected_output=(
"A structured object conforming to the CalendarEvent model."
),
agent=information_extractor_agent,
output_pydantic=CalendarEvent,
)
event_crew = Crew(
agents=[information_extractor_agent],
tasks=[extract_event_task],
process=Process.sequential,
verbose=False
)
try:
crewai_response = event_crew.kickoff(
inputs={
"event_text": EVENT_UNSTRUCTURED_TEXT
},
)
print("\n--> Crew Execution Result (Structured Output):")
pprint(crewai_response.pydantic)
except Exception as e:
print(f"\n--> ERROR during Crew execution: {e}")
As you can see:
- litellm.get_supported_openai_params provides a way to check for response_format support.
- litellm.completion can generate structured output directly, ready to be parsed, if the underlying model supports it via parameters like response_format.
- Trying to get structured output directly from a crewai.LLM instance fails – this seems to be the root of the error you saw.
- However, a full crewai.Crew, using that same LLM instance internally, can successfully generate a structured Pydantic BaseModel object when you specify output_pydantic in the Task.
So yeah, there's definitely an issue if you try to use structured output directly on the LLM class or its subclasses. But, when that same LLM object is used as part of the machinery within a Crew, you do get the structured output you're looking for at the end. It's just handled at a higher level.
Hmm, interesting... We have a check to ensure the response_format is supported by the model.
We should fix it if it's not working properly by adding better support for OpenRouter. I'd really like to see someone contribute a fix for that.
@lucasgomide,
So, the issue is that crewai.LLM._validate_call_params depends on litellm.utils.supports_response_schema. This, in turn, relies on the file model_prices_and_context_window.json being kept up-to-date with info for every single model and provider out there. I mean, you gotta figure this LiteLLM design choice was made back when there were maybe 5 or 6 models total in the world, right?
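As a side note, you can peek at what that static metadata currently claims for a given model (best effort; which keys come back depends on the litellm version):
import litellm

try:
    # Looks the model up in litellm's bundled model_prices_and_context_window.json metadata.
    info = litellm.get_model_info("openrouter/mistralai/mistral-small-3.1-24b-instruct")
    print(info.get("supports_response_schema"))
except Exception as exc:
    print(f"Model not mapped in litellm's metadata: {exc}")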
Another way to go is using litellm.get_supported_openai_params to check the parameters, like I showed in the code snippet above.
But the key thing is, when you use a crewai.LLM where it's actually meant to be used – specifically within an Agent.llm – everything works just fine, as my example clearly demonstrates. If the user uses the LLM object the way the framework expects, things run smoothly, and CrewAI has no problem generating structured output normally.
@mouramax, your observations are just mind-blowing.
If I understand correctly, with output_pydantic in the task, this issue will get sorted.
I also believe that the LLM call we make should be able to support the response_schema parameter, independent of litellm.
I can also add a PR to support this.
What I was thinking of doing is:
- Bypass the supports_response_schema check from litellm specifically for OpenRouter, and add a custom check for OpenRouter.
- The models that currently support a response schema on OpenRouter are listed here: https://openrouter.ai/docs/features/structured-outputs, https://openrouter.ai/models?order=newest&supported_parameters=structured_outputs
- Another thing I think we need to change here is the response_schema: currently it accepts a Pydantic class, and I think this needs to be JSON, per https://openrouter.ai/docs/features/structured-outputs.
Let me know if you have any suggestions on this; I will try to add a PR by tomorrow. @lucasgomide @mouramax @Mateleo
@Vidit-Ostwal,
Your initiative is excellent, and I'm sure the others involved in this Issue will provide valuable contributions to your eventual PR. I'll share my thoughts on this here, but please understand they might differ from the consensus, so always take my opinion with a big grain of salt, okay?
I believe that using a set of if/else statements to try and work around the problem is just bringing an issue that originally belongs to LiteLLM into CrewAI. The fact is, there's no robust way to dynamically get this capability information from every model/provider. That's the real problem. Today, we'll make an exception to handle "OpenRouter," tomorrow it'll be to handle "CloseRouter," the day after for "OpenClosedRouter," and pretty soon we'll have our own version of LiteLLM's problem, you know what I mean?
From my limited perspective, since the issue doesn't occur when a crewai.LLM object is used the way it's intended (as demonstrated in my example), it leads me to believe that, fortunately, CrewAI's core mechanism doesn't even rely on this check. If it did, my example using Task.output_pydantic would throw an error, but instead, it runs perfectly fine using the OP's model. So, it seems much more like a matter of simply eliminating this check rather than trying to force it to work by accounting for a whole bunch of exceptions to the rule. I don't think that approach is sustainable or easy to maintain over time.
When the response mechanism detects an error during output_pydantic validation, an error message at that moment would be enough to let the user know that, unfortunately, that specific provider/model combination doesn't support structured output. In other words, handle it as a runtime error instead of trying to handle it statically upfront. Does that make sense?
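A minimal sketch of that runtime-error approach from the caller's side (illustrative only; this is not how crewAI currently behaves):
import json
from pydantic import BaseModel, ValidationError

def call_with_structured_output(llm, messages, model_cls: type[BaseModel]):
    # Send the request as-is and surface schema problems as a runtime error,
    # instead of rejecting the call upfront with a static capability check.
    raw = llm.call(messages=messages)
    try:
        return model_cls.model_validate(json.loads(raw))
    except (json.JSONDecodeError, ValidationError) as exc:
        raise ValueError(
            f"Output did not match {model_cls.__name__}; this provider/model "
            "combination may not support structured output."
        ) from exc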
By removing the check at the low level, meaning at the crewai.LLM.call level, users who are interacting with the system at a more advanced (lower) level could implement their own handling for structured output, exactly like they would if using litellm.completion directly.
Those are just my two cents on the matter from an architectural standpoint. I'm sure the others involved, especially the OP, can offer more valuable opinions.
Another way to go is using litellm.get_supported_openai_params to check the parameters, like I showed in the code snippet above.
That's exactly what I meant to do. But for now, I'm thinking it might be better to fix this in the LiteLLM repo. We have some workarounds related to the message format (which is a LiteLLM feature) that, in my opinion, would be better handled directly in their repo. It could be risky to have too much customization on top of LiteLLM, which is already an amazing wrapper. The key point is that it's very hard to handle all supported models.
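For reference, a check along those lines is a small wrapper over litellm's public helper (sketch only; not what crewAI does today):
from litellm import get_supported_openai_params

def response_format_supported(model: str, provider: str) -> bool:
    # True if litellm reports 'response_format' among the supported
    # OpenAI-compatible parameters for this model/provider pair.
    try:
        params = get_supported_openai_params(model=model, custom_llm_provider=provider) or []
    except Exception:
        return False
    return "response_format" in params

print(response_format_supported("mistralai/mistral-small-3.1-24b-instruct", "openrouter"))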
@mouramax, @lucasgomide
Thanks for your contributions.
"...belongs to LiteLLM into CrewAI. The fact is, there's no robust way to dynamically get this capability information from every model/provider. That's the real problem."
"...in my opinion, would be better handled directly in their repo. It could be risky to have too much customization on top of LiteLLM, which is already an amazing wrapper."
In the longer run, I also believe this needs to be handled by litellm; otherwise, as @mouramax said, we would start handling litellm's problems in our repo, which won't be a good fix.
When the response mechanism detects an error during output_pydantic validation, an error message at that moment would be enough to let the user know that, unfortunately, that specific provider/model combination doesn't support structured output. In other words, handle it as a runtime error instead of trying to handle it statically upfront. Does that make sense?
Are we sure that if validation of output_pydantic ever fails, the reason is definitely that the model does not support the response format?
@mouramax Thanks a lot, super answer.
It now works for me:
@task
def write_section_task(self) -> Task:
return Task(
config=self.tasks_config["write_section_task"], # type: ignore[index]
output_pydantic=Section,
)
@agent
def content_writer(self) -> Agent:
return Agent(
config=self.agents_config["content_writer"], # type: ignore[index]
verbose=True,
llm=mistralLLM,
)
@Vidit-Ostwal
Be careful, because the problem with OpenRouter models is that each model can have several providers, and each provider can behave very differently (it may or may not accept tools, structured output, etc.).
At the moment, there's no way of knowing reliably in advance whether such and such a provider for a given model is compatible with CrewAI, and LiteLLM won't make much of a difference.
It's really up to the provider's discretion, and can change from one day to the next (or so the thinking goes).
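That said, for a best-effort signal, OpenRouter's public model listing does expose a supported_parameters field per model; it is aggregated across providers, so it still doesn't answer the per-provider question. A rough sketch (assumes the requests package; field names as documented by OpenRouter at the time of writing):
import requests

def model_advertises_structured_outputs(model_id: str) -> bool:
    # Best-effort check against OpenRouter's public /models listing.
    # Note: supported_parameters is aggregated across providers, so a specific
    # upstream provider may still reject response_format.
    resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
    resp.raise_for_status()
    for entry in resp.json().get("data", []):
        if entry.get("id") == model_id:
            return "structured_outputs" in entry.get("supported_parameters", [])
    return False

print(model_advertises_structured_outputs("mistralai/mistral-small-3.1-24b-instruct"))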
Thanks to @mouramax for the answer.
I faced the same issue when calling a Gemma model via the lm_studio provider with a Pydantic model as the response format. Apparently there's no issue when using a raw call on the LiteLLM wrapper, but there is with crewai LLM's call method.
Hey everyone! 🚀 I ran into this exact issue and found three working solutions.
The Problem: OpenRouter models DO support structured outputs, but LiteLLM's supports_response_schema() returns False for them, causing:
- CrewAI and other frameworks to skip sending response_format
- "Unknown parameter: response_format.response_schema" errors
- Broken structured output functionality
Root Cause: LiteLLM doesn't recognize openrouter as supporting structured outputs + the OpenRouter adapter strips/rewrites nested schemas.
Here's how I fixed it (pick what works for you):
Solution 1: Library Fix for OpenRouter Transformation
For the LiteLLM maintainers: The real issue is in the OpenRouter adapter that strips the response_format. Fix it here:
# Inside OpenrouterConfig.map_openai_params(...) method:
def map_openai_params(self, non_default_params, optional_params, model, drop_params):
# ...existing code...
# Add this block to preserve response_format:
if "response_format" in non_default_params:
rf = non_default_params["response_format"]
if isinstance(rf, dict):
# force LiteLLM to send your response_format exactly as given
mapped_openai_params["response_format"] = rf
# ...existing code...
AND add OpenRouter to the global support list:
PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA = (
"openai",
"anthropic",
"cohere",
"ai21",
"mistral",
"openrouter", # ← Add this line!
)
You need both changes - the global list fix alone won't work because the OpenRouter adapter still strips the schema.
Solution 2: No-Code Workaround (For Users Right Now)
Use extra_body to inject the proper OpenRouter format:
import json, litellm
# Your schema
output_schema = {
"type": "object",
"properties": {
"title": {"type": "string"},
"tags": {"type": "array", "items": {"type": "string"}},
"description": {"type": "string"}
},
"required": ["title", "tags", "description"]
}
# This bypasses all the schema checking
response = litellm.completion(
model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
messages=[{"role": "user", "content": "Analyze this..."}],
extra_body={
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "structured_response",
"strict": True,
"schema": output_schema
}
}
}
)
# Parse the structured response
result = json.loads(response.choices[0].message.content)
print(result)
Works immediately with any OpenRouter model that supports structured outputs!
Solution 3: Manual Library Patch (Advanced Users)
If you want to patch your local installation right now:
# Find the OpenrouterConfig.map_openai_params method and add:
if "response_format" in non_default_params:
rf = non_default_params["response_format"]
if isinstance(rf, dict):
mapped_openai_params["response_format"] = rf
For CrewAI Users Specifically:
Use Solution 2 with your existing setup:
# Your existing LLM setup
mistralLLM = BaseLLM(
model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
base_url="https://openrouter.ai/api/v1",
api_key=OPENROUTER_API_KEY,
temperature=0.0,
# Add this to force structured output:
extra_body={
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "your_schema_name",
"strict": True,
"schema": your_pydantic_model.model_json_schema()
}
}
}
)
💡 LiteLLM Team: This needs both fixes!
- OpenRouter transformation fix: stop stripping response_format dicts
- Global support list: add "openrouter" so supports_response_schema() returns True
The transformation fix is the critical one - without it, even adding to the global list won't work because the schema gets stripped before sending to OpenRouter.
Hope this helps everyone dealing with this! The extra_body workaround is your best bet until the library gets patched. 🔥
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
Please, someone provide a proper fix for this.
Facing the same issue today with gpt-oss; please, someone address this.
Hello all! I am facing the same issue.
I am following the "Build Your First Flow" guide: https://docs.crewai.com/en/guides/flows/first-flow. There, it explicitly uses direct LLM calls:
This is the power of flows - combining different types of processing (user interaction, direct LLM calls, crew-based tasks) into a coherent, event-driven system.
# Initialize the LLM
llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
I am new to CrewAI, and I have encountered the same problem discussed in this thread, with OpenRouter, while trying to follow this guide.