_handle_stateless_request ClosedResourceError
Initial Checks
- [x] I confirm that I'm using the latest version of MCP Python SDK
- [x] I confirm that I searched for my issue in https://github.com/modelcontextprotocol/python-sdk/issues before opening this issue
Description
Every request raises an exception.
Call path: _handle_stateless_request > run_stateless_server > http_transport.connect() > for session_message in write_stream_reader
It is entered twice, and an exception (anyio.ClosedResourceError) is thrown for the second request because the stream has already been closed.
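As context, here is a minimal standalone sketch (not SDK code; assumes anyio >= 4) of the anyio behaviour behind the traceback below: if a memory receive stream is closed while a reader task is still looping over it, the reader's next receive() raises ClosedResourceError.

import anyio


async def main() -> None:
    send_stream, receive_stream = anyio.create_memory_object_stream[str](1)

    async def reader() -> None:
        try:
            # Mirrors "async for session_message in write_stream_reader"
            async for item in receive_stream:
                print("handling:", item)
                await anyio.sleep(0.2)  # still busy when termination runs
        except anyio.ClosedResourceError:
            print("next receive() raised ClosedResourceError")

    async with anyio.create_task_group() as tg:
        tg.start_soon(reader)
        await send_stream.send("request-1")
        await anyio.sleep(0.1)
        # Termination closes the reader's stream while the reader is still
        # handling a message; its next receive() then fails.
        await receive_stream.aclose()


anyio.run(main)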
Example Code
import contextlib
import logging
from collections.abc import AsyncIterator

import uvicorn
from mcp.server.lowlevel import Server
from mcp.server.streamable_http_manager import StreamableHTTPSessionManager
from starlette.applications import Starlette
from starlette.routing import Mount
from starlette.types import Receive, Scope, Send

logger = logging.getLogger(__name__)

app = Server("mcp-server")


def main(port: int, transport: str, dev: bool, limit_concurrency: int) -> int:
    # Create the session manager with true stateless mode
    session_manager = StreamableHTTPSessionManager(
        app=app,
        event_store=None,
        json_response=True,
        stateless=True,
    )

    @contextlib.asynccontextmanager
    async def lifespan(app: Starlette) -> AsyncIterator[None]:
        """Context manager for managing session manager lifecycle."""
        async with session_manager.run():
            logger.info("Application started with StreamableHTTP session manager!")
            try:
                yield
            finally:
                logger.info("Application shutting down...")

    async def handle_streamable_http(scope: Scope, receive: Receive, send: Send) -> None:
        await session_manager.handle_request(scope, receive, send)

    starlette_app = Starlette(
        debug=dev,
        routes=[
            Mount("/message", app=handle_streamable_http),
        ],
        lifespan=lifespan,
    )

    uvicorn.run(
        starlette_app,
        host="0.0.0.0",
        port=port,
        http="httptools",
        limit_concurrency=limit_concurrency,
    )
    return 0


if __name__ == "__main__":
    # main() is synchronous; call it directly with example values
    main(port=8000, transport="streamable-http", dev=False, limit_concurrency=100)
Server logs:
[2025-07-31 15:10:41,884][MainThread:19320][task_id:mcp.server.streamable_http][streamable_http.py:630][INFO][Terminating session: None]
[2025-07-31 15:10:41,885][MainThread:19320][task_id:mcp.server.streamable_http][streamable_http.py:880][ERROR][Error in message router]
Traceback (most recent call last):
File "d:\project\bpaas\apihub-mcp-server\.venv\Lib\site-packages\mcp\server\streamable_http.py", line 831, in message_router
async for session_message in write_stream_reader:
...<46 lines>...
)
File "d:\project\bpaas\apihub-mcp-server\.venv\Lib\site-packages\anyio\abc\_streams.py", line 35, in __anext__
return await self.receive()
^^^^^^^^^^^^^^^^^^^^
File "d:\project\bpaas\apihub-mcp-server\.venv\Lib\site-packages\anyio\streams\memory.py", line 111, in receive
return self.receive_nowait()
~~~~~~~~~~~~~~~~~~~^^
File "d:\project\bpaas\apihub-mcp-server\.venv\Lib\site-packages\anyio\streams\memory.py", line 93, in receive_nowait
raise ClosedResourceError
anyio.ClosedResourceError
The "Terminating session" step has already released the memory stream first.
debug for : first
debug for : second
The log shows that "Terminating session" is executed first, followed by create_task_group.
Python & MCP Python SDK
MCP version: 1.12.2
Python: 3.13
I ran into exactly the same problem. So far, only downgrading to mcp 1.11.0 fixes it for me. Is there another workaround? MCP version: 1.12.0, anyio version: 4.7.0, Python: 3.13.5
One more variant of a similar exception from an MCP server with streamable-http transport:
+ Exception Group Traceback (most recent call last):
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/server/streamable_http_manager.py", line 241, in run_server
| await self.app.run(
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/server/lowlevel/server.py", line 614, in run
| async with AsyncExitStack() as stack:
| File "/opt/conda/envs/my-env/lib/python3.11/contextlib.py", line 745, in __aexit__
| raise exc_details[1]
| File "/opt/conda/envs/my-env/lib/python3.11/contextlib.py", line 728, in __aexit__
| cb_suppress = await cb(*exc_details)
| ^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/shared/session.py", line 218, in __aexit__
| return await self._task_group.__aexit__(exc_type, exc_val, exc_tb)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 781, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/server/lowlevel/server.py", line 625, in run
| async with anyio.create_task_group() as tg:
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 781, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/server/lowlevel/server.py", line 648, in _handle_message
| await self._handle_request(message, req, session, lifespan_context, raise_exceptions)
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/server/lowlevel/server.py", line 712, in _handle_request
| await message.respond(response)
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/shared/session.py", line 131, in respond
| await self._session._send_response( # type: ignore[reportPrivateUsage]
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/mcp/shared/session.py", line 329, in _send_response
| await self._write_stream.send(session_message)
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/anyio/streams/memory.py", line 243, in send
| self.send_nowait(item)
| File "/opt/conda/envs/my-env/lib/python3.11/site-packages/anyio/streams/memory.py", line 212, in send_nowait
| raise ClosedResourceError
| anyio.ClosedResourceError
+------------------------------------
Observation: the MCP client receives the response (that an error occurred), so it is not a network issue. Python 3.11.13, MCP 1.18.0, MCP client from llama-index-tools-mcp 0.4.2
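The send-side variant in this traceback can be illustrated with the same kind of minimal anyio sketch (not SDK code): send() on a memory send stream that has already been closed raises ClosedResourceError, even though the HTTP response itself already went out.

import anyio


async def main() -> None:
    send_stream, receive_stream = anyio.create_memory_object_stream[str](1)
    await send_stream.aclose()  # the response stream was already torn down
    try:
        await send_stream.send("late response")  # mirrors _write_stream.send(...)
    except anyio.ClosedResourceError:
        print("send() on a closed stream raises ClosedResourceError")


anyio.run(main)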
Reproducible example here https://github.com/modelcontextprotocol/python-sdk/issues/1190#issuecomment-3429054580
Same issue. Also happens for FastMCP 2.13.0.
Same issue here. FastMCP 2.10.6.
Closing as duplicate of #1190, which is tracking this issue. The root cause analysis and fix are being tracked there.