
[Bug]: Response from calling tool (call_xxxxx) ***** Error: Object of type Book is not JSON serializable

Open SuMiaoALi opened this issue 1 year ago • 4 comments

Describe the bug

In my test demo, a tool executor agent executes a tool function in a group chat. The function returns a `List[Book]`. Here is the code:

def list_books() -> Annotated[List[Book], 'Return all available books']:
    """
    Function: list_books
    Description: Fetch the list of all books.
    Returns: List[Book] - a list containing all books.
    Raises:
        HTTPError: raised if the request fails.
    """
    response = requests.get(f"{base_url}/books")
    response.raise_for_status()
    return [Book(**book) for book in response.json()]

The function caller and function executor:


class Book(BaseModel):
    """
    Class: Book (book model)
    Description: Represents the details of a book.
    """
    book_id: Annotated[str, Field(..., description="Unique book identifier")]
    title: Annotated[str, Field(description="Book title")]
    author: Annotated[str, Field(description="Book author")]
    publisher: Annotated[str, Field(description="Publisher")]
    isbn: Annotated[Optional[str], Field(None, description="ISBN number")]
    price: Annotated[float, Field(description="Book price")]

base_url = 'http://127.0.0.1:8001'


def list_books() -> Annotated[List[Book], 'Return all available books']:
    """
    Function: list_books
    Description: Fetch the list of all books.
    Returns: List[Book] - a list containing all books.
    Raises:
        HTTPError: raised if the request fails.
    """
    response = requests.get(f"{base_url}/books")
    response.raise_for_status()
    return [Book(**book) for book in response.json()]


def get_book(book_id: Annotated[str, 'Unique book identifier']) -> Annotated[Book, 'Book details']:
    """
    Function: get_book
    Description: Fetch a book's details by its ID.
    Args:
        book_id (str): The unique identifier of the book.
    Returns: Book - the details of the matching book.
    Raises:
        HTTPError: raised if the request fails.
    """
    response = requests.get(f"{base_url}/books/{book_id}")
    response.raise_for_status()
    return Book(**response.json())


def create_book(book_data: Annotated[Book, 'Request payload for creating a Book']) -> Annotated[Book, 'The created book']:
    """
    Function: create_book
    Description: Create a new book record.
    Args:
        book_data (Book): The data for the new book.
    Returns: Book - the created book object.
    Raises:
        HTTPError: raised if the request fails.
    """
    response = requests.post(f"{base_url}/books/create", json=book_data.model_dump())
    response.raise_for_status()
    return Book(**response.json())


def delete_book(book_id: Annotated[str, 'ID of the book']) -> Annotated[Book, 'The deleted book']:
    """
    Function: delete_book
    Description: Delete a book.
    Args:
        book_id (str): The ID of the book.
    Returns: Book - the successfully deleted book object.
    Raises:
        HTTPError: raised if the request fails.
    """
    response = requests.post(f"{base_url}/books/del/{book_id}")
    response.raise_for_status()
    return Book(**response.json())

__tools = [
    {
        'func': list_books,
        'name': list_books.__name__,
        'description': 'Method for list all books'
    },
    {
        'func': get_book,
        'name': get_book.__name__,
        'description': 'Method for get one book by book_id'
    },
    {
        'func': create_book,
        'name': create_book.__name__,
        'description': 'Create and save a book'
    },
    {
        'func': delete_book,
        'name': delete_book.__name__,
        'description': 'Delete a book'
    },
]


def do_register_for_llm(llm_agent: ConversableAgent):
    if llm_agent.llm_config:
        for tool in __tools:
            llm_agent.register_for_llm(
                name=tool['name'],
                description=tool['description']
            )(tool['func'])


def do_register_for_execution(executor_agent: ConversableAgent):
    for tool in __tools:
        executor_agent.register_for_execution(
            name=tool['name'],
        )(tool['func'])


tool_executor: ConversableAgent = ConversableAgent(
    name='book_service_tool_executor',
    description="function executor for book service",
    system_message='You are a function executor responsible for executing functions of the book module. You must call the corresponding function exactly as the caller instructs, fill in the correct parameters, and return the function result faithfully.',
    llm_config=False,
    human_input_mode="NEVER",
    default_auto_reply="Please choose an appropriate function to call and tell me its parameters."
)
do_register_for_execution(tool_executor)


book_brain_agent = ConversableAgent(
    name='book_brain',
    description="book_service_caller",
    llm_config=llm_config,
    human_input_mode="NEVER",
    system_message="""
    You are the manager of the book service module, responsible for handling all user requests about books.\n
    Based on the user's request, analyze and break it down to decide your next action. Your available actions are: \n
    1. Choose a suitable tool from the provided tools, parse and pass in the right parameters, and fulfill the user's request.\n
    2. Write a piece of Python code and hand it to the code interpreter for execution.\n
    State your execution plan first, for example:\n
    'Next, I will make the following calls:
    1. First query the book list to find the available books
    2. Then query the details of that book' \n
    ......
    Judge the correctness of your decision from the results of the tool or code execution; if it is incorrect, revise your decision and continue.\n
    """,
)
book_client.do_register_for_llm(book_brain_agent)

I have unit-tested all of these functions, and they all pass.

When a group chat is initialized and the executor runs list_books(), the error always occurs, while the other functions work fine.

list_books() differs from the others only in its return value: it returns List[Book], while the others return Book.

Everything works fine until the executor runs the list_books function.

The error message says the Book model is not JSON serializable, even though the other tool functions work correctly.
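
For illustration, here is a minimal sketch outside AutoGen (assuming pydantic v2) of why the return type matters: pydantic can dump a single `BaseModel` itself, but the stdlib `json` module cannot encode a list of model instances, which produces exactly the reported error:

```python
import json
from pydantic import BaseModel

class Book(BaseModel):
    book_id: str
    title: str

books = [Book(book_id="1", title="Book A")]

# A single model can be serialized by pydantic itself:
single = books[0].model_dump_json()

# But the stdlib json module does not know how to encode model
# instances, which is exactly the reported error:
try:
    json.dumps(books)
except TypeError as e:
    err = str(e)

print(err)  # Object of type Book is not JSON serializable
```

The group-chat log below reproduces this behavior end to end.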

user (to book module QA chat manager):

Query the book with id=1

--------------------------------------------------------------------------------

Next speaker: book_brain

book_brain (to book module QA chat manager):

***** Suggested tool call (call_QBmjKoC0UqSywGbRDR2XgMq1): get_book *****
Arguments: 
{"book_id":"1"}
*************************************************************************

--------------------------------------------------------------------------------

Next speaker: book_service_tool_executor


>>>>>>>> EXECUTING FUNCTION get_book...
book_service_tool_executor (to book module QA chat manager):

book_service_tool_executor (to book module QA chat manager):

***** Response from calling tool (call_QBmjKoC0UqSywGbRDR2XgMq1) *****
{"book_id":"1","title":"水浒传","author":"未知","publisher":"人民出版社","isbn":null,"price":100.0}
**********************************************************************

--------------------------------------------------------------------------------

Next speaker: book_brain

book_brain (to book module QA chat manager):

The query result is as follows:

- Book ID (book_id): 1
- Title (title): 水浒传 (Water Margin)
- Author (author): Unknown
- Publisher (publisher): 人民出版社
- ISBN (isbn): none
- Price (price): 100 yuan

Is there anything else I can help you with?
Next speaker: book_brain

book_brain (to book module QA chat manager):

I ran into a problem again while trying to fetch the book list. I will try a different approach to solve it.

First, I will attempt the request to fetch the book list.

### Execution plan
1. Call the `list_books` function again to fetch the book list.

Next, I will try calling the list-books feature again.
***** Suggested tool call (call_p5UCUC1LbDVzBnsv0w4rS3Bw): list_books *****
Arguments: 
{}
***************************************************************************

--------------------------------------------------------------------------------

Next speaker: book_service_tool_executor


>>>>>>>> EXECUTING FUNCTION list_books...
book_service_tool_executor (to book module QA chat manager):

book_service_tool_executor (to book module QA chat manager):

***** Response from calling tool (call_p5UCUC1LbDVzBnsv0w4rS3Bw) *****
Error: Object of type Book is not JSON serializable
**********************************************************************

Please help me soon. Thanks very much!
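
One common workaround (a sketch, not an official fix; `fetch_books` is a hypothetical stand-in for the HTTP call in the report) is to serialize the list yourself and return a plain JSON string, which is always serializable as a tool result:

```python
import json
from typing import List
from pydantic import BaseModel

class Book(BaseModel):
    book_id: str
    title: str

def fetch_books() -> List[Book]:
    # Stand-in for the HTTP call in the report; the real code
    # would call requests.get(f"{base_url}/books") and build Books.
    return [Book(book_id="1", title="Book A")]

def list_books() -> str:
    """Return the book list as a JSON string so the tool result
    is a plain str and never hits json.dumps on model instances."""
    return json.dumps([b.model_dump() for b in fetch_books()],
                      ensure_ascii=False)

result = list_books()
print(result)  # [{"book_id": "1", "title": "Book A"}]
```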

Steps to reproduce

No response

Model Used

gpt-4o

Expected Behavior

The list_books tool call should return the serialized book list without error, like the other tool functions.

Screenshots and logs

No response

Additional Information

AutoGen Version: v0.2.35
Operating System: Windows 11
Python Version: 3.11.9
Related Issues: none

SuMiaoALi avatar Aug 28 '24 10:08 SuMiaoALi

We have encountered a similar issue in AutoGen Studio version 0.4. For us, it is `TextBlock is not JSON serializable`. The component integration tests with AssistantAgent pass and work fine; the issue only comes up when using the component in AutoGen Studio in a team with GroupChat. Adding a try/except block around the failing logging statement (line 33 in teammanager.py) makes everything work normally, except for the ToolCallExecutionResult message being passed to LLMCallEventMessage for logging. I am curious whether we should submit a PR with a potential fix.
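
The try/except described above amounts to defensive serialization around the logging call. A minimal sketch of the idea (the helper name `safe_json`, the `str` fallback, and the `TextBlock` stand-in are my assumptions, not the actual AutoGen Studio code):

```python
import json

def safe_json(obj) -> str:
    # Fall back to str() for anything the json encoder can't handle,
    # so a non-serializable payload cannot crash the logging call.
    try:
        return json.dumps(obj, ensure_ascii=False, default=str)
    except TypeError:
        return str(obj)

class TextBlock:
    # Stand-in for the non-serializable message content from the report.
    def __str__(self):
        return "TextBlock(text='hi')"

print(safe_json({"content": TextBlock()}))  # {"content": "TextBlock(text='hi')"}
```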

oogetyboogety avatar Apr 07 '25 04:04 oogetyboogety

@oogetyboogety, the error you are seeing is related to a recent release of autogen-agentchat. We plan a release of autogenstudio within the next two days that will handle this.

To confirm, this is related to Anthropic models and fixed here, right? https://github.com/microsoft/autogen/pull/6135

victordibia avatar Apr 07 '25 04:04 victordibia

> @oogetyboogety, the error you are seeing is related to a recent release of autogen-agentchat. We plan a release of autogenstudio within the next two days that will handle this.
>
> To confirm, this is related to Anthropic models and fixed here, right? #6135

Indeed, that's the fix. Excellent, thanks! Will pull the new release as soon as it's available.

oogetyboogety avatar Apr 07 '25 23:04 oogetyboogety

@oogetyboogety, released in v0.4.2.1: https://pypi.org/project/autogenstudio/0.4.2.1/

Let me know if you have any issues.

victordibia avatar Apr 08 '25 21:04 victordibia