
Unable to get Structured Response with Agentchat Swarm and AzureOpenAI gpt models

Open sharanyabhat opened this issue 7 months ago • 5 comments

What happened?

I have a Swarm implementation with 5 agents using AzureOpenAIChatCompletionClient. I have configured structured output by passing a Pydantic class via output_content_type in the AssistantAgent configs. When I try to fetch the response using task_result = await Console(team.run_stream(task=query)), I see the failure below:

Traceback (most recent call last):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_single_threaded_agent_runtime.py", line 533, in _on_message
    return await agent.on_message(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 67, in on_message_impl
    return await super().on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_routed_agent.py", line 485, in on_message_impl
    return await h(self, message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_routed_agent.py", line 268, in wrapper
    return_value = await func(self, message, ctx)  # type: ignore
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 79, in handle_request
    async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 827, in on_messages_stream
    async for inference_output in self._call_llm(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 939, in _call_llm
    async for chunk in model_client.create_stream(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_ext/models/openai/_openai_client.py", line 811, in create_stream
    async for chunk in chunks:
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_ext/models/openai/_openai_client.py", line 1032, in _create_stream_chunks_beta_client
    event = await event_future
            ^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 192, in __anext__
    return await self._iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 241, in __stream__
    events_to_fire = self._state.handle_chunk(sse_event)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 348, in handle_chunk
    return self._build_events(
           ^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 576, in _build_events
    choice_state.get_done_events(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 608, in get_done_events
    self._content_done_events(choice_snapshot=choice_snapshot, response_format=response_format)
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 649, in _content_done_events
    parsed = maybe_parse_content(
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/_parsing/_completions.py", line 161, in maybe_parse_content
    return _parse_content(response_format, message.content)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/_parsing/_completions.py", line 221, in _parse_content
    return cast(ResponseFormatT, model_parse_json(response_format, content))
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/_compat.py", line 169, in model_parse_json
    return model.model_validate_json(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/pydantic/main.py", line 744, in model_validate_json
    return cls.__pydantic_validator__.validate_json(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for MyResponse
  Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid, input_value='{"responses":[{"Markdown... - **Answers**: None"}]}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid
ERROR:autogen_core:Error processing publish message for agent1_agent_71cba8a1-71b6-49b6-810a-426a80fdad39/71cba8a1-71b6-49b6-810a-426a80fdad39
Traceback (most recent call last):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_single_threaded_agent_runtime.py", line 533, in _on_message
    return await agent.on_message(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 72, in on_message_impl
    return await super().on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_routed_agent.py", line 486, in on_message_impl
    return await self.on_unhandled_message(message, ctx)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 133, in on_unhandled_message
    raise ValueError(f"Unhandled message in agent container: {type(message)}")
ValueError: Unhandled message in agent container: <class 'autogen_agentchat.teams._group_chat._events.GroupChatError'>
ERROR:autogen_core:Error processing publish message for agent3_agent_71cba8a1-71b6-49b6-810a-426a80fdad39/71cba8a1-71b6-49b6-810a-426a80fdad39
Traceback (most recent call last):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_single_threaded_agent_runtime.py", line 533, in _on_message
    return await agent.on_message(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 72, in on_message_impl
    return await super().on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_routed_agent.py", line 486, in on_message_impl
    return await self.on_unhandled_message(message, ctx)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 133, in on_unhandled_message
    raise ValueError(f"Unhandled message in agent container: {type(message)}")
ValueError: Unhandled message in agent container: <class 'autogen_agentchat.teams._group_chat._events.GroupChatError'>
ERROR:autogen_core:Error processing publish message for agent2_agent_71cba8a1-71b6-49b6-810a-426a80fdad39/71cba8a1-71b6-49b6-810a-426a80fdad39
Traceback (most recent call last):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_single_threaded_agent_runtime.py", line 533, in _on_message
    return await agent.on_message(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 72, in on_message_impl
    return await super().on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_routed_agent.py", line 486, in on_message_impl
    return await self.on_unhandled_message(message, ctx)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 133, in on_unhandled_message
    raise ValueError(f"Unhandled message in agent container: {type(message)}")
ValueError: Unhandled message in agent container: <class 'autogen_agentchat.teams._group_chat._events.GroupChatError'>
ERROR:autogen_core:Error processing publish message for agent4_agent_71cba8a1-71b6-49b6-810a-426a80fdad39/71cba8a1-71b6-49b6-810a-426a80fdad39
Traceback (most recent call last):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_single_threaded_agent_runtime.py", line 533, in _on_message
    return await agent.on_message(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_sequential_routed_agent.py", line 72, in on_message_impl
    return await super().on_message_impl(message, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_core/_routed_agent.py", line 486, in on_message_impl
    return await self.on_unhandled_message(message, ctx)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 133, in on_unhandled_message
    raise ValueError(f"Unhandled message in agent container: {type(message)}")
ValueError: Unhandled message in agent container: <class 'autogen_agentchat.teams._group_chat._events.GroupChatError'>
ERROR:llm.api:Unable to process the query 'list devices' for org_id:bdd8043c-c38f-402d-8711-163502d1fecd, chat_id:9ceb2a8c-4250-11f0-9e0e-22925462b1c9 err:ValidationError: 1 validation error for MyResponse
  Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid, input_value='{"responses":[{"Markdown... - **Answers**: None"}]}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid
Traceback:
Traceback (most recent call last):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 79, in handle_request
    async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 827, in on_messages_stream
    async for inference_output in self._call_llm(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 939, in _call_llm
    async for chunk in model_client.create_stream(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_ext/models/openai/_openai_client.py", line 811, in create_stream
    async for chunk in chunks:
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_ext/models/openai/_openai_client.py", line 1032, in _create_stream_chunks_beta_client
    event = await event_future
            ^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 192, in __anext__
    return await self._iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 241, in __stream__
    events_to_fire = self._state.handle_chunk(sse_event)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 348, in handle_chunk
    return self._build_events(
           ^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 576, in _build_events
    choice_state.get_done_events(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 608, in get_done_events
    self._content_done_events(choice_snapshot=choice_snapshot, response_format=response_format)
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 649, in _content_done_events
    parsed = maybe_parse_content(
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/_parsing/_completions.py", line 161, in maybe_parse_content
    return _parse_content(response_format, message.content)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/_parsing/_completions.py", line 221, in _parse_content
    return cast(ResponseFormatT, model_parse_json(response_format, content))
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/_compat.py", line 169, in model_parse_json
    return model.model_validate_json(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/pydantic/main.py", line 744, in model_validate_json
    return cls.__pydantic_validator__.validate_json(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for MyResponse
  Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid, input_value='{"responses":[{"Markdown... - **Answers**: None"}]}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid
,
 stack trace:Traceback (most recent call last):
  File "/Users/sharanyab/my_project/llm/api.py", line 94, in chat
    response, input_tokens, output_tokens, model_name = get_llm_chat_response(org_id=str(org_id), query=chat.query,
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/llm/api_functions.py", line 51, in get_llm_chat_response
    resp, input_tokens, output_tokens, model_name = chat(query=query, messages=messages, temperature=temperature,
                                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/utils/lm_calls/main.py", line 284, in chat
    response, input_tokens, output_tokens, model_id = asyncio.run(get_response_from_autogen(
                                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/utils/lm_calls/autogen/autogen.py", line 98, in get_response_from_autogen
    json_resp, tool_calls, output_tokens, input_tokens = await (
                                                         ^^^^^^^
  File "/Users/sharanyab/my_project/utils/lm_calls/autogen/autogen.py", line 37, in run_team_stream
    task_result = await Console(team.run_stream(task=query))
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/ui/_console.py", line 117, in Console
    async for message in stream:
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_base_group_chat.py", line 518, in run_stream
    raise RuntimeError(str(message.error))
RuntimeError: ValidationError: 1 validation error for MyResponse
  Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid, input_value='{"responses":[{"Markdown... - **Answers**: None"}]}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid
Traceback:
Traceback (most recent call last):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/teams/_group_chat/_chat_agent_container.py", line 79, in handle_request
    async for msg in self._agent.on_messages_stream(self._message_buffer, ctx.cancellation_token):
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 827, in on_messages_stream
    async for inference_output in self._call_llm(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 939, in _call_llm
    async for chunk in model_client.create_stream(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_ext/models/openai/_openai_client.py", line 811, in create_stream
    async for chunk in chunks:
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/autogen_ext/models/openai/_openai_client.py", line 1032, in _create_stream_chunks_beta_client
    event = await event_future
            ^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 192, in __anext__
    return await self._iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 241, in __stream__
    events_to_fire = self._state.handle_chunk(sse_event)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 348, in handle_chunk
    return self._build_events(
           ^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 576, in _build_events
    choice_state.get_done_events(
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 608, in get_done_events
    self._content_done_events(choice_snapshot=choice_snapshot, response_format=response_format)
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/streaming/chat/_completions.py", line 649, in _content_done_events
    parsed = maybe_parse_content(
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/_parsing/_completions.py", line 161, in maybe_parse_content
    return _parse_content(response_format, message.content)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/lib/_parsing/_completions.py", line 221, in _parse_content
    return cast(ResponseFormatT, model_parse_json(response_format, content))
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/openai/_compat.py", line 169, in model_parse_json
    return model.model_validate_json(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/sharanyab/my_project/.venv/lib/python3.11/site-packages/pydantic/main.py", line 744, in model_validate_json
    return cls.__pydantic_validator__.validate_json(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for MyResponse
  Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid, input_value='{"responses":[{"Markdown... - **Answers**: None"}]}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid

We are using AzureOpenAIChatCompletionClient:

common_args = {
    "timeout": int(os.getenv("LLM_TIMEOUT", "500")),  # in seconds
    "stream_options": {"include_usage": True},
}

model_client = AzureOpenAIChatCompletionClient(
    azure_deployment=model_id,
    model=model_id,
    api_version=api_version,
    azure_endpoint=base_url,
    api_key=api_key,
    top_p=top_p,
    temperature=temperature,
    **common_args,
)
The Swarm config is as below:

termination = HandoffTermination(target="user") | TextMentionTermination("TERMINATE")
team = Swarm(
    participants=agents,
    termination_condition=termination,
    emit_team_events=True,
)

The structured response model looks as below:

class MyResponse(BaseModel):
    class Response(BaseModel):
        Markdown: str

    responses: list[Response]
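
As a minimal illustration of the failure mode (an assumption based on the traceback, not autogen code): pydantic's model_validate_json accepts exactly one JSON document, so a payload matching this schema validates fine on its own, but anything following the first object raises the same "Invalid JSON: trailing characters" error seen above.

```python
from pydantic import BaseModel, ValidationError


class MyResponse(BaseModel):
    class Response(BaseModel):
        Markdown: str

    responses: list[Response]


# A single well-formed JSON document validates fine.
ok = MyResponse.model_validate_json('{"responses": [{"Markdown": "hello"}]}')

# A second JSON document after the first (as in the traceback) triggers
# the "Invalid JSON: trailing characters" ValidationError.
two_docs = '{"responses": [{"Markdown": "part1"}]}\n{"responses": [{"Markdown": "part2"}]}'
try:
    MyResponse.model_validate_json(two_docs)
    raised = False
except ValidationError:
    raised = True
```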

The agent configs are all similar to the below:

from typing import List

from autogen_core.model_context import ChatCompletionContext
from autogen_core.models import LLMMessage


class HistoryMessages(ChatCompletionContext):
    """
    A simple implementation of the ChatCompletionContext abstract class that stores messages in memory.
    """
    async def get_messages(self) -> List[LLMMessage]:
        """Retrieve all messages from the context."""
        return self._messages


common_args = {
    "model_client": model_client,
    "reflect_on_tool_use": False,
    "output_content_type": MyResponse,
    "model_client_stream": True,
    "model_context": HistoryMessages(initial_messages=messages),
}

agent1 = AssistantAgent(
    "agent1",
    handoffs=["agent2", "agent3", "agent4", "agent5"],
    system_message=AGENT1_SYSTEM_MESSAGE,
    **common_args,
)

The workflow occasionally succeeds, but most of the time I encounter this issue; it is not consistent, but it is frequent. Is this a known bug, or is there an error in my implementation? The issue also occurs on 0.5.7, and more frequently there. It looks similar to https://github.com/microsoft/autogen/issues/6480

Which packages was the bug in?

Python AgentChat (autogen-agentchat>=0.4.0)

AutoGen library version.

Autogen 0.5.6

Other library version.

autogen-agentchat==0.5.6 autogen-core==0.5.6 autogen-ext==0.5.6 azure-ai-inference==1.0.0b9 azure-ai-projects==1.0.0b10 azure-common==1.1.28 azure-core==1.34.0 azure-identity==1.21.0 azure-search-documents==11.5.2

Model used

gpt-4o

Model provider

Azure OpenAI

Other model provider

No response

Python version

Python 3.11

.NET version

None

Operating system

None

sharanyabhat avatar Jun 05 '25 22:06 sharanyabhat

What model are you using? It would also be helpful to see the rest of your agents, their prompts, and your query

peterychang avatar Jun 06 '25 15:06 peterychang

I've encountered a similar problem. I'm using google/gemini-2.5-pro provided by OpenRouter. However, when I switch to openai/gpt-4.1, it works fine. I am able to get the correct JSON when using langchain and crewai for the same task.

Schema

from typing import List

from pydantic import BaseModel, Field


class Stage(BaseModel):
	"""
	Represents a single stage in the overall plan, containing a subtask
	and a prompt for the agent that will execute it.
	"""

	page_knowledge: str = Field(
		description="The current state of knowledge about the page, if applicable."
	)
	task: str = Field(description="The specific subtask for this stage.")
	prompt: str = Field(description="The derived prompt for the Coder Agent.")


class LeaderOutput(BaseModel):
	"""
	Defines the structured output for the Leader Agent, which is a
	sequence of stages.
	"""

	intention: str = Field(description="The overall intention or goal of the user.")

	stages: List[Stage] = Field(
		description="An ordered list of stages to accomplish the user's goal."
	)

Error

    return cls.__pydantic_validator__.validate_json(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 3 validation errors for LeaderOutput
stages.0
  Input should be an object [type=model_type, input_value="...", input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/model_type
stages.1
  Input should be an object [type=model_type, input_value='...', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/model_type
stages.2
  Input should be an object [type=model_type, input_value='...', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/model_type

miaobuao avatar Jun 26 '25 06:06 miaobuao

I am using gpt-4o and gpt-4.1, and I am hitting the issue on both models.

The issue I am seeing looks more related to https://community.openai.com/t/duplicated-structured-output-content/986766: when I debugged further, I saw the json_data containing duplicated responses like '{"response": "my_response"}\n{"response": "my_response_part2"}', which fails in the pydantic validators in pydantic/main.py
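
The duplicated payload described above can be worked around outside autogen by parsing only the first complete JSON document before validation. This is a hedged sketch, not an autogen or OpenAI API; AgentReply and parse_first_json are hypothetical names for illustration.

```python
# Sketch of a workaround (an assumption, not an official fix): use
# json.JSONDecoder.raw_decode to recover only the first complete JSON
# document from a duplicated payload, then validate it with pydantic.
import json

from pydantic import BaseModel


class AgentReply(BaseModel):  # hypothetical stand-in for the real response model
    response: str


def parse_first_json(model: type[BaseModel], payload: str) -> BaseModel:
    # raw_decode stops at the end of the first JSON value and reports how
    # many characters it consumed, ignoring any trailing duplicates.
    obj, _end = json.JSONDecoder().raw_decode(payload)
    return model.model_validate(obj)


duplicated = '{"response": "my_response"}\n{"response": "my_response_part2"}'
reply = parse_first_json(AgentReply, duplicated)
```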

I am using: openai==1.88.0 autogen-agentchat==0.6.1 autogen-core==0.6.1 autogen-ext==0.6.1 pydantic-settings==2.4.0 pydantic==2.11.4 pydantic_core==2.33.2

Let me know if you need the snippets too.

sharanyabhat avatar Jul 01 '25 23:07 sharanyabhat

from typing import Optional

from pydantic import BaseModel


class AgentResponse(BaseModel):
    thoughts: Optional[str] = None
    result: Optional[str] = None
Everything works fine with GPT-4.1, but I encountered the same error with GPT-5.

    agent = AssistantAgent(
        name="cloud_snow_assistant",
        model_client=model_client,
        system_message=f"""You are a workflow assistant. You must strictly follow the instructions. Use tool for reflection and the work order status transition based on the comment history. 
                    ## Approach
                    Think through this step-by-step:

                    {common_status_rules_prompt}

                    ###
                    You MUST use tool to reflect your action, the input should be:
                    Action: <your action>
                    Latest User Comment: <latest user comments>

                    Once you get the APPROVE feedback from reflect_tool, you can USE the correct tool to take the action or output "RECEIVED PASS" when requires no tool from the previous thoughts immediately.
                    Otherwise, When you get REFUSE, you MUST rethink and use the reflect_tool for evaluation again
                    ###    
                    
                    Compare scenarios and determine the most robust approach
                """,
        tools=[reflect_tool, close_tool, change_tool],
        output_content_type=AgentResponse,
        reflect_on_tool_use=True,
        max_tool_iterations=2,
    )

pydantic_core._pydantic_core.ValidationError: 1 validation error for AgentResponse
  Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid, input_value='{"thoughts":"Proposing a...ow status to cloudops"}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid

toughnoah avatar Sep 18 '25 08:09 toughnoah

I've encountered a similar problem. I'm using google/gemini-2.5-flash provided by OpenRouter. However, when I switch to openai/gpt-4o-mini, it works fine.

class AgentResponse(BaseModel):
    content: str
    task_finished: bool


chat_client = OpenAIChatCompletionClient(
    model=model_str,
    temperature=temperature,
    base_url='https://openrouter.ai/api/v1',
    api_key=self.settings.OPENROUTER_API_KEY,
    model_info={
        'vision': model.is_vision_model(),
        'function_calling': True,
        'json_output': True,
        'family': 'unknown',
        'multiple_system_messages': True,
        'structured_output': True,
    },
)

agent = AssistantAgent(
    name=agent_name,
    model_client=model,
    description=params.description,
    tools=tools,
    memory=[pg_memory],
    system_message=system_prompt,
    output_content_type=AgentResponse,
)

Here is the system prompt I provided:

You are helpful assistant which can answer users questions in arabic, use search tool if required
Your output must follow this schema:
{
     'content':'<YOUR_ANSWER_HERE>'
     'task_finished': '<TRUE_WHEN_THE_TASK_FINISHED_WITH_THIS_RESPONSE>'
 }

The error:

pydantic_core._pydantic_core.ValidationError: 1 validation error for AgentResponse
  Invalid JSON: EOF while parsing a string at line 2 column 16 [type=json_invalid, input_value='{\n    "content": "', input_type=str]
    For further information visit https://errors.pydantic.dev/2.12/v/json_invalid

I am using: openai==1.109.1 autogen-agentchat==0.7.5 autogen-core==0.7.5 autogen-ext==0.7.5 pydantic-settings==2.11.0 pydantic==2.12.2 pydantic_core==2.41.4

ayhamakeed2000 avatar Oct 15 '25 13:10 ayhamakeed2000