[Bug]: Podcast generation failed: Invalid json output:
What did you do when it broke?
- Generate Podcast
- I tried:
  - LM Studio: llama3.1
  - OpenAI server: gpt-5-nano
  - Ollama: llama3.1
- All of them gave me this error: ERROR | commands.podcast_commands:generate_podcast_command:171 - Podcast generation failed: Invalid json output, together with langchain_core.exceptions.OutputParserException: Invalid json output: ... During task with name 'generate_outline'
- When I used gpt-4o-mini with the OpenAI server, it worked fine.
How did it break?
The issues are:
- When using a 'wrong' model that fails to generate proper JSON output, the error is not visible in the UI; you need to dig through the logs to find it. (A minimal reproduction of the parser error is sketched after this list.)
- What are some recommended models to use with LM Studio and Ollama?
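For context on the first issue, the "Invalid json output" wording matches langchain_core's JsonOutputParser, which raises OutputParserException when the model's reply is not parseable JSON. A minimal, self-contained sketch (the reply string here is made up):

```python
# Minimal reproduction of the "Invalid json output" error above:
# JsonOutputParser raises OutputParserException when a model wraps its
# answer in prose or <think> tags instead of returning parseable JSON.
from langchain_core.exceptions import OutputParserException
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()
try:
    parser.parse("<think>planning the outline...</think> Here is the outline!")
except OutputParserException as exc:
    print(exc)  # "Invalid json output: <think>planning the outline..."
```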
Logs or Screenshots
No response
Open Notebook Version
v1-latest-single (Docker)
Environment
Windows 11
Additional Context
No response
Update: for anyone else encountering the same problem, using qwen3:8b with Ollama helps.
Perhaps it would be good to catch the JSON error and make the user aware of it, so that they can switch to a model with better support for JSON output. A rough sketch of what that could look like is below.
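This is only a sketch, not the actual Open Notebook code: create_podcast is the podcast_creator entry point seen in the traceback, while report_error_to_ui is a hypothetical stand-in for whatever channel the UI uses to display errors.

```python
# Rough sketch of surfacing the failure instead of only logging it.
# report_error_to_ui is a hypothetical stand-in; only create_podcast and
# OutputParserException appear in the real traceback.
from langchain_core.exceptions import OutputParserException


async def generate_podcast_safely(create_podcast, report_error_to_ui, **kwargs):
    try:
        return await create_podcast(**kwargs)
    except OutputParserException:
        # The model emitted text that is not valid JSON (common with small
        # local models, or with models that wrap answers in <think> tags).
        report_error_to_ui(
            "Podcast generation failed: the selected model did not return "
            "valid JSON. Try a model with better structured-output support "
            "(e.g. qwen3:8b), or add 'Do not use thinking tags' to the briefing."
        )
        raise
```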
Update: using LM Studio 0.3.30 and qwen3-8b, it gives this error:
Podcast generation failed: Error code: 400 - {'error': "'response_format.type' must be 'json_schema' or 'text'"}
==
(Updated) Sorry, when I submitted this information I missed the previous fix commit. The suggestion to "try adding 'Do not use thinking tags' to your briefing" does not work in my case: it fails immediately on the notebook side, as soon as my LM Studio receives the request, so I am wondering whether the model ever produced a response at all.
==========
I get a similar error (openai.BadRequestError: Error code: 400 - {'error': "'response_format.type' must be 'json_schema' or 'text'"}) when using 'qwen3-14b-mlx' in LM Studio. The same model works fine in chat, and switching to the Ollama qwen3 model also works. The call stack log is attached below:
======================== logs ==============================
2025-11-08T04:27:54.860696Z TRACE delete_queries: surrealdb::core::kvs::node: Deleting live queries for a connection ids=[]
INFO: 192.168.31.10:7726 - "GET /api/podcasts/episodes HTTP/1.1" 200 OK
2025-11-08 04:27:54.873 | ERROR | commands.podcast_commands:generate_podcast_command:171 - Podcast generation failed: Error code: 400 - {'error': "'response_format.type' must be 'json_schema' or 'text'"}
2025-11-08 04:27:54.873 | ERROR | commands.podcast_commands:generate_podcast_command:172 - Error code: 400 - {'error': "'response_format.type' must be 'json_schema' or 'text'"}
Traceback (most recent call last):
  File "/app/.venv/bin/surreal-commands-worker", line 10, in <module>
  File "/app/commands/podcast_commands.py", line 131, in generate_podcast_command
    result = await create_podcast(
  File "/app/.venv/lib/python3.12/site-packages/podcast_creator/graph.py", line 150, in create_podcast
    result = await graph.ainvoke(initial_state, config=config)
  File "/app/.venv/lib/python3.12/site-packages/langgraph/pregel/main.py", line 3182, in ainvoke
    async for chunk in self.astream(
  File "/app/.venv/lib/python3.12/site-packages/langgraph/pregel/main.py", line 3000, in astream
    async for _ in runner.atick(
  File "/app/.venv/lib/python3.12/site-packages/langgraph/pregel/_runner.py", line 304, in atick
    await arun_with_retry(
  File "/app/.venv/lib/python3.12/site-packages/langgraph/pregel/_retry.py", line 137, in arun_with_retry
    return await task.proc.ainvoke(task.input, config)
  File "/app/.venv/lib/python3.12/site-packages/langgraph/_internal/_runnable.py", line 705, in ainvoke
    input = await asyncio.create_task(
  File "/app/.venv/lib/python3.12/site-packages/langgraph/_internal/_runnable.py", line 473, in ainvoke
    ret = await self.afunc(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/podcast_creator/nodes.py", line 49, in generate_outline_node
    outline_preview = await outline_model.ainvoke(outline_prompt_text)
  File "/app/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 402, in ainvoke
    llm_result = await self.agenerate_prompt(
  File "/app/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1099, in agenerate_prompt
    return await self.agenerate(
  File "/app/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1057, in agenerate
    raise exceptions[0]
  File "/app/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1310, in _agenerate_with_cache
    result = await self._agenerate(
  File "/app/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 1546, in _agenerate
    raise e
  File "/app/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 1514, in _agenerate
    _handle_openai_bad_request(e)
  File "/app/.venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 1509, in _agenerate
    raw_response = await self.root_async_client.chat.completions.with_raw_response.parse(  # noqa: E501
  File "/app/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
      └ kwargs = {'model': 'qwen3-14b-mlx', 'top_p': 0.9, 'temperature': 1.0, 'response_format': {'type': 'json_object'}, 'max_completion_toke...
  File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 1630, in parse
    return await self._post(
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': "'response_format.type' must be 'json_schema' or 'text'"}
This seems to be an issue with how LM Studio handles JSON output requests: the request kwargs in the traceback show the client sending response_format={'type': 'json_object'}, which LM Studio rejects because its API only accepts 'json_schema' or 'text'. We'll take a look into it.
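For reference, a minimal sketch of the mismatch and a possible workaround, assuming LM Studio's OpenAI-compatible server at its default http://localhost:1234/v1; the model name and schema here are illustrative:

```python
# Assumes LM Studio's OpenAI-compatible server at its default address;
# model name and schema are illustrative.
from openai import BadRequestError, OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
messages = [{"role": "user", "content": 'Reply with {"ok": true} as JSON.'}]

# This mirrors what the traceback shows the client sending: JSON mode
# (response_format={'type': 'json_object'}), which LM Studio's API rejects.
try:
    client.chat.completions.create(
        model="qwen3-14b-mlx",
        messages=messages,
        response_format={"type": "json_object"},
    )
except BadRequestError as exc:
    print(exc)  # 400 - 'response_format.type' must be 'json_schema' or 'text'

# An explicit JSON schema is accepted instead:
resp = client.chat.completions.create(
    model="qwen3-14b-mlx",
    messages=messages,
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "ok_flag",
            "schema": {
                "type": "object",
                "properties": {"ok": {"type": "boolean"}},
                "required": ["ok"],
            },
        },
    },
)
print(resp.choices[0].message.content)
```

On the langchain_openai side, that would correspond to requesting structured output with method="json_schema" (e.g. ChatOpenAI(...).with_structured_output(schema, method="json_schema")) rather than JSON mode.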