
[Bug]: Input should be a valid string

Open derek-kam opened this issue 7 months ago • 3 comments

Describe the bug

I am trying to follow the two-agent chat use case at this link: https://docs.ag2.ai/latest/docs/use-cases/notebooks/notebooks/run_and_event_processing/

I got this error:

ValidationError: 1 validation error for RunCompletionEvent summary Input should be a valid string [type=string_type, input_value={'content': 'The conversa...one, 'tool_calls': None}, input_type=dict] For further information visit https://errors.pydantic.dev/2.8/v/string_type
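
From the message, the failing field appears to be RunCompletionEvent.summary: it is declared as a string but receives the full message dict. A minimal Pydantic sketch (a hypothetical model mirroring just that one field, not ag2's actual class) reproduces the same validation failure:

from pydantic import BaseModel, ValidationError

# Hypothetical stand-in for the summary field on ag2's RunCompletionEvent
class SummarySketch(BaseModel):
    summary: str

try:
    # Feeding the field a message dict, as in the traceback, fails validation
    SummarySketch(summary={"content": "The conversation...", "tool_calls": None})
except ValidationError as e:
    print(e)  # Input should be a valid string [type=string_type, ...]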

Steps to reproduce

Copy the exact code from this link: https://docs.ag2.ai/latest/docs/use-cases/notebooks/notebooks/run_and_event_processing/ and change only the LLM config to use Ollama.

Chat between two comedian agents

1. Import our agent class

from autogen import ConversableAgent, LLMConfig
from autogen.io.run_response import Cost, RunResponseProtocol
2. Define our LLM configuration (swapped from the docs' OpenAI GPT-4o mini to Ollama)

# The documentation example uses OpenAI's GPT-4o mini via the OPENAI_API_KEY environment variable:
# llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
llm_config = LLMConfig(
    # Use Meta's Llama 3.1 model (model names must match Ollama exactly)
    model="llama3.1:8b",
    # api_type="ollama" selects the Ollama client class
    api_type="ollama",
    stream=False,
    client_host="http://localhost:11434",
)
print(f"Using LLM: {llm_config}")

3. Create our agents, who will tell each other jokes, with Jack ending the chat when Emma says FINISH
with llm_config:
    jack = ConversableAgent(
        "Jack",
        system_message=("Your name is Jack and you are a comedian in a two-person comedy show."),
        is_termination_msg=lambda x: "FINISH" in x["content"],
    )
    emma = ConversableAgent(
        "Emma",
        system_message=(
            "Your name is Emma and you are a comedian "
            "in a two-person comedy show. Say the word FINISH "
            "ONLY AFTER you've heard 2 of Jack's jokes."
        ),
    )

4. Run the chat

response: RunResponseProtocol = jack.run(
    emma, message="Emma, tell me a joke about goldfish and peanut butter.", summary_method="reflection_with_llm"
)

for event in response.events:
    print(event)

    if event.type == "input_request":
        event.content.respond("exit")

print(f"{response.summary=}")
print(f"{response.messages=}")
print(f"{response.events=}")
print(f"{response.context_variables=}")
print(f"{response.last_speaker=}")
print(f"{response.cost=}")
assert response.last_speaker in ["Jack", "Emma"], "Last speaker should be one of the agents"
assert isinstance(response.cost, Cost)
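
As a temporary workaround, the error seems to come from the reflection_with_llm summary path, so switching to the other built-in summary method, which returns the last message as a plain string, may sidestep it (an untested sketch):

# Untested workaround: skip the LLM-generated summary, which appears to hand
# the Pydantic model a message dict instead of a string
response = jack.run(
    emma,
    message="Emma, tell me a joke about goldfish and peanut butter.",
    summary_method="last_msg",
)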

derek-kam avatar May 09 '25 04:05 derek-kam

Hi @derek-kam,

Thanks for reporting this issue! Could you please try a different model (a larger Llama model, or one of the OpenAI models shown in our original documentation example) to help us narrow down whether this is model-specific or a more general issue?

harishmohanraj avatar May 09 '25 04:05 harishmohanraj

@harishmohanraj I tried Cohere Command R and got the same error.

derek-kam avatar May 09 '25 04:05 derek-kam

@derek-kam, thank you for the update. I will tag it for further attention.

harishmohanraj avatar May 09 '25 04:05 harishmohanraj