
TypeError: Type is not msgpack serializable: ToolMessage

Open khteh opened this issue 7 months ago • 6 comments

2025-06-11 19:33:06 ERROR    /invoke exception! Type is not msgpack serializable: Send, repr: TypeError('Type is not msgpack serializable: Send')
Traceback (most recent call last):
  File "/usr/src/Python/rag-agent/src/controllers/HomeController.py", line 139, in invoke
    async for step in current_app.agent.astream(
    ...<6 lines>...
        step["messages"][-1].pretty_print()
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/pregel/__init__.py", line 2596, in astream
    async with AsyncPregelLoop(
               ~~~~~~~~~~~~~~~^
        input,
        ^^^^^^
    ...<20 lines>...
        cache_policy=self.cache_policy,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ) as loop:
    ^
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/pregel/loop.py", line 1393, in __aexit__
    return await exit_task
           ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/contextlib.py", line 768, in __aexit__
    raise exc
  File "/usr/lib/python3.13/contextlib.py", line 751, in __aexit__
    cb_suppress = await cb(*exc_details)
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/pregel/executor.py", line 209, in __aexit__
    raise exc
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/checkpoint/postgres/aio.py", line 305, in aput_writes
    params = await asyncio.to_thread(
             ^^^^^^^^^^^^^^^^^^^^^^^^
    ...<7 lines>...
    )
    ^
  File "/usr/lib/python3.13/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/concurrent/futures/thread.py", line 59, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/checkpoint/postgres/base.py", line 239, in _dump_writes
    *self.serde.dumps_typed(value),
     ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 219, in dumps_typed
    raise exc
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 213, in dumps_typed
    return "msgpack", _msgpack_enc(obj)
                      ~~~~~~~~~~~~^^^^^
  File "/home/khteh/.local/share/virtualenvs/rag-agent-YeW3dxEa/lib/python3.13/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 639, in _msgpack_enc
    return ormsgpack.packb(data, default=_msgpack_default, option=_option)
           ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Type is not msgpack serializable: Send

I print out the "messages" in the controller method:

messages: [
    HumanMessage(content='who are you?', additional_kwargs={}, response_metadata={}, id='d14c995f-75cb-4537-81e2-3df2aca4844b'), 
    HumanMessage(content='who are you?', additional_kwargs={}, response_metadata={}, id='2f868449-4d09-406d-a611-125587f42a5b'), 
    HumanMessage(content='who are you?', additional_kwargs={}, response_metadata={}, id='fd9e3d75-4dc3-44f0-9f06-d78e4e9a12a4'), 
    HumanMessage(content='who are you?', additional_kwargs={}, response_metadata={}, id='6f4f6de0-5fb1-4f12-848b-21e9a5ffe6c0'), 
    HumanMessage(content='Who are you?', additional_kwargs={}, response_metadata={}, id='19cdae61-2c4f-40ad-95e9-7377e0456f76'), 
    AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'llama3.3', 'created_at': '2025-06-11T11:08:42.691480265Z', 'done': True, 'done_reason': 'stop', 'total_duration': 95834251406, 'load_duration': 15437716, 'prompt_eval_count': 864, 'prompt_eval_duration': 9162860269, 'eval_count': 42, 'eval_duration': 86650921468, 'model_name': 'llama3.3'}, name='RAG ReAct Agent', id='run--9b6bcd48-f0ad-4575-b036-b31320a63b67-0', tool_calls=[{'name': 'upsert_memory', 'args': {'content': 'The user asked "who are you?" multiple times.', 'context': 'This was mentioned at the beginning of the conversation.'}, 'id': '5c056736-e86c-4382-bc3b-202fd6ed26e1', 'type': 'tool_call'}], usage_metadata={'input_tokens': 864, 'output_tokens': 42, 'total_tokens': 906}), 
    ToolMessage(content='Error: AttributeError("\'StructuredTool\' object has no attribute \'__name__\'")\n Please fix your mistakes.', name='upsert_memory', id='9d8c5e97-461d-42e5-b892-cb3590c07d5c', tool_call_id='5c056736-e86c-4382-bc3b-202fd6ed26e1', status='error'), 
    AIMessage(content='I am Bob, a helpful AI assistant. I provide accurate answers to your questions and have the ability to call tools to assist with tasks. I also save conversation memory using the available tool provided. Is there anything else I can help you with?', additional_kwargs={}, response_metadata={'model': 'llama3.3', 'created_at': '2025-06-11T11:10:27.849479911Z', 'done': True, 'done_reason': 'stop', 'total_duration': 105066415850, 'load_duration': 36994621, 'prompt_eval_count': 170, 'prompt_eval_duration': 12098652192, 'eval_count': 50, 'eval_duration': 92928009203, 'model_name': 'llama3.3'}, name='RAG ReAct Agent', id='run--f31dc79e-e1aa-4b14-90ee-03df4c097e16-0', usage_metadata={'input_tokens': 170, 'output_tokens': 50, 'total_tokens': 220}),
    HumanMessage(content='Who are you?', additional_kwargs={}, response_metadata={}, id='7fcb39fa-e4e6-4c41-8b4a-4464571dd25f'), 
    HumanMessage(content='Who are you?', additional_kwargs={}, response_metadata={}, id='ed6c7b1b-eb0c-4c15-af92-c7c7cc76f06d'), 
    HumanMessage(content='Who are you?', additional_kwargs={}, response_metadata={}, id='79e50d31-5c79-4f09-99ba-ccd25dacb9db'), 
    AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'llama3.3', 'created_at': '2025-06-11T11:13:01.19188341Z', 'done': True, 'done_reason': 'stop', 'total_duration': 94357052710, 'load_duration': 20104817, 'prompt_eval_count': 1005, 'prompt_eval_duration': 8655724249, 'eval_count': 43, 'eval_duration': 85658548906, 'model_name': 'llama3.3'}, name='RAG ReAct Agent', id='run--85c25dbd-ea6e-43de-8ef4-12b3785f08bf-0', tool_calls=[{'name': 'upsert_memory', 'args': {'content': 'The user asked "who are you?" multiple times.', 'context': 'This was mentioned at the beginning of the conversation.'}, 'id': '3a34ef1c-e58b-4b39-8c25-d742d236a121', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1005, 'output_tokens': 43, 'total_tokens': 1048}), 
    ToolMessage(content='Error: AssertionError()\n Please fix your mistakes.', name='upsert_memory', id='7e955f89-4928-493b-953e-aae88410ae33', tool_call_id='3a34ef1c-e58b-4b39-8c25-d742d236a121', status='error'),
]

BASH terminal:

================================ Human Message =================================

Who are you?
================================== Ai Message ==================================
Name: RAG ReAct Agent
Tool Calls:
  upsert_memory (31f2e5b2-c5a8-4763-8ff1-5c64c62da007)
 Call ID: 31f2e5b2-c5a8-4763-8ff1-5c64c62da007
  Args:
    content: The user asked "who are you?" multiple times.
    context: This was mentioned at the beginning of the conversation.
================================= Tool Message =================================
Name: upsert_memory

Error: AssertionError()
 Please fix your mistakes.

The tool:

import logging
from typing import Annotated, Optional

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import InjectedToolArg, tool
from langgraph.prebuilt import InjectedStore
from langgraph.store.base import BaseStore
from uuid_extensions import uuid7str

@tool(parse_docstring=True)
async def upsert_memory(
    content: str,
    context: str,
    *,
    # uuid7str is a function, not a type; the parameter holds a str.
    memory_id: Optional[str] = None,
    # Hide these arguments from the model.
    config: Annotated[RunnableConfig, InjectedToolArg],
    store: Annotated[BaseStore, InjectedStore()],
):
    """Upsert a memory in the database.

    If a memory conflicts with an existing one, then just UPDATE the
    existing one by passing in memory_id - don't create two memories
    that are the same. If the user corrects a memory, UPDATE it.

    Args:
        content: The main content of the memory. For example:
            "User expressed interest in learning about French."
        context: Additional context for the memory. For example:
            "This was mentioned while discussing career options in Europe."
        memory_id: ONLY PROVIDE IF UPDATING AN EXISTING MEMORY.
            The memory to overwrite.
    """
    logging.debug(f"upsert_memory content: {content}, context: {context}, memory_id: {memory_id}")
    mem_id = memory_id or uuid7str()
    user_id = Configuration.from_runnable_config(config).user_id
    logging.debug(f"upsert_memory user_id: {user_id}")
    await store.aput(
        ("memories", user_id),
        key=str(mem_id),
        value={"content": content, "context": context},
    )
    logging.debug(f"upsert_memory mem_id: {mem_id}")
    return f"Stored memory {mem_id}"

khteh avatar Jun 11 '25 11:06 khteh

Go to jsonplus.py

venv/lib/python3.13/site-packages/langgraph/checkpoint/serde/jsonplus.py

def message_to_dict(msg):
    # Handles HumanMessage, AIMessage, ToolMessage, etc.
    if hasattr(msg, "to_dict"):
        return msg.to_dict()
    elif isinstance(msg, dict):
        return msg
    else:
        # Fallback: try to extract content and role
        return {"role": getattr(msg, "role", "user"), "content": str(getattr(msg, "content", msg))}
    
def _msgpack_enc(data: Any) -> bytes:
    return ormsgpack.packb(message_to_dict(data), default=_msgpack_default, option=_option)

Then refresh your environment; this should fix the issue, as it is working for me with the following version:

Name: langgraph
Version: 0.4.8
Summary: Building stateful, multi-actor applications with LLMs
Home-page: 
Author: 
Author-email: 
License-Expression: MIT
Location: /Users/h0p0303/Documents/PersonalWork/PersonalRepos/gen-ai-learning-session/venv/lib/python3.13/site-packages
Requires: langchain-core, langgraph-checkpoint, langgraph-prebuilt, langgraph-sdk, pydantic, xxhash
Required-by: 

I raised a PR for this: https://github.com/langchain-ai/langgraph/pull/5115

Please take a look and use it as you see fit.

dshimanshupant avatar Jun 16 '25 07:06 dshimanshupant

Actually, this seems to be a common problem whenever langgraph tries to serialize non-primitive objects, especially in langgraph-codeact (e.g. a VideoFileClip object from moviepy).

Symbolk avatar Jun 20 '25 01:06 Symbolk

But it seems the DataFrame type is still not handled: Type is not msgpack serializable: DataFrame

Wwwduojin avatar Jun 26 '25 07:06 Wwwduojin

Why hasn't this branch been merged yet?

geknow avatar Jul 10 '25 08:07 geknow

This also occurs when the chain contains a langchain_core.tools.tool with response_format="content_and_artifact".

DocWARG avatar Jul 25 '25 21:07 DocWARG

> This case also shows when chain contains langchain_core.tools.tool with response_format="content_and_artifact"

Can confirm that this fails. If it is set to response_format="content", it works.

phal0r avatar Aug 07 '25 20:08 phal0r