When I use two pages to access the same MCP server and run the tool, I encounter "Error POSTing to endpoint (HTTP 500): SSE connection not established". Even if I start two inspectors on the same computer using different ports, this problem still exists
Describe the bug
When I start the Inspector and open two pages that connect to the same MCP server, running a tool on both at the same time prompts "Error POSTing to endpoint (HTTP 500): SSE connection not established". The problem persists even when I start two Inspectors on the same computer on different ports.
To Reproduce
Steps to reproduce the behavior:
1. Launch the Inspector and start the MCP server.
2. Open two pages and visit http://127.0.0.1:6274/
3. Connect to the same MCP server and run the tool.
Expected behavior
Both pages return the tool execution results normally.
Logs

```
Received message for sessionId c265c82d-4810-4d59-9cd5-da2ca7c39baa
Error in /message route: Error: SSE connection not established
    at SSEServerTransport.handlePostMessage (file:///D:/DevEnv/Node/node_cache/_npx/4482475f1ce046f8/node_modules/@modelcontextprotocol/sdk/dist/esm/server/sse.js:61:19)
    at file:///D:/DevEnv/Node/node_cache/_npx/4482475f1ce046f8/node_modules/@modelcontextprotocol/inspector/server/build/index.js:262:25
    at Layer.handleRequest (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\lib\layer.js:152:17)
    at next (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\lib\route.js:157:13)
    at Route.dispatch (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\lib\route.js:117:3)
    at handle (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\index.js:435:11)
    at Layer.handleRequest (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\lib\layer.js:152:17)
    at D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\index.js:295:15
    at processParams (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\index.js:582:12)
    at next (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\router\index.js:291:5)
Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
    at ServerResponse.setHeader (node:_http_outgoing:699:11)
    at ServerResponse.header (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\express\lib\response.js:684:10)
    at ServerResponse.json (D:\DevEnv\Node\node_cache\_npx\4482475f1ce046f8\node_modules\express\lib\response.js:247:10)
    at file:///D:/DevEnv/Node/node_cache/_npx/4482475f1ce046f8/node_modules/@modelcontextprotocol/inspector/server/build/index.js:266:25
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
```

This same pair of errors repeats identically for each subsequent tool call, always with the same sessionId.
This is a similar challenge for OneDrive and SharePoint - the files are too large and varied, and the volume too high, to support RPC responses for file content. We could stream resources if the SDKs supplied the raw response object to tools - or perhaps add a streaming resource type indicated by some property.
@patrick-rodgers I did speak to some folks at Anthropic regarding this, and they are open to suggestions. We seemed to share both an understanding of the issue and some conflicting feelings about how to solve it in a way that feels right for the spec.
I'd definitely encourage anyone interested to share proposals. Something will have to solve this eventually.
@SamMorrowDrums - let me know if you want to chat on it - I don't have a strong opinion currently on how to solve it, just agree that encoded RPC won't scale for us, so +1 to the need. Maybe we can collab on a proposal? I mean, download urls are easy - but would love a native way to support streaming resources.
Say I call for the resource file://{some key}/content <- it would be super cool to stream that back.
Or maybe that doesn't fit and download url is the right path and there is a new resource type for external/downloadable content?
Not sure, but interested in working through it.
For resources, perhaps we could introduce a property like supportsDirectRead that would indicate whether the client can read the resource contents via the uri without going through the MCP server. For example, a resource with a file:// uri might have supportsDirectRead: true. The resource would still be readable via the MCP server, but clients that support the property and the URI's scheme could operate more efficiently.
For tool call results, there have been some discussions about including resources in results (for example, https://github.com/modelcontextprotocol/modelcontextprotocol/discussions/90#discussioncomment-12980950). If we supported something like that, then supportsDirectRead could apply there as well.
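To make the idea concrete, here is a rough sketch of what a resource entry carrying the proposed property could look like, and how a client might decide which read path to take. Note that `supportsDirectRead`, `ResourceWithDirectRead`, and `canReadDirectly` are all hypothetical names for illustration; none of this is part of the current spec or SDKs.

```typescript
// Hypothetical shape of a resources/list entry with the proposed property.
interface ResourceWithDirectRead {
  uri: string;
  name: string;
  mimeType?: string;
  // Proposed: the client may fetch `uri` directly, bypassing resources/read.
  supportsDirectRead?: boolean;
}

const listing: ResourceWithDirectRead[] = [
  {
    uri: "file:///var/data/report.pdf",
    name: "report.pdf",
    mimeType: "application/pdf",
    supportsDirectRead: true,
  },
  // A scheme only the server understands: must go through resources/read.
  { uri: "custom://internal/summary", name: "summary", supportsDirectRead: false },
];

// A client that supports the property AND the URI's scheme can read directly;
// everything else falls back to resources/read via the MCP server.
function canReadDirectly(r: ResourceWithDirectRead, knownSchemes: string[]): boolean {
  return (
    r.supportsDirectRead === true &&
    knownSchemes.includes(new URL(r.uri).protocol.replace(":", ""))
  );
}
```

The key point is that the property is purely an optimization hint: a client that ignores it (or doesn't recognize the scheme) still gets correct behavior through the server.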
Having a sort of dual-pattern download like that sounds like a good idea for this in particular — HTTP's capabilities for large downloads via chunked transfers are already good for what they are, and it'd be complex to get something similar into MCP's transport layer in an efficient and generalizable way across sHTTP, stdio, and whatever else may be adopted in the future. A direct download also takes the load off of the MCP server, so it doesn't need to pull double duty as a fileserver proxy.
At the same time, it'll be useful to be able to go through pure MCP if desired, even if it means a less-efficient transfer. Ideally, as MCP's own streaming support improves, it'll become useful in more and more contexts that would otherwise mandate direct streaming, but for now we would have an escape hatch we can drop down to where it makes sense to do so.
With that being said — on second thought, offering a direct download URL on the resource itself also gives client SDKs the leeway to do this completely transparently. I can easily imagine how an SDK might handle a resource read by doing a direct fetch internally, and then just exposing it as a "Resource" to the caller, despite never having actually made a resources/read call to the server. This could be a very versatile way of supporting this.
> I can easily imagine how an SDK might handle a resource read by doing a direct fetch internally, and then just exposing it as a "Resource" to the caller, despite never having actually made a `resources/read` call to the server. This could be a very versatile way of supporting this.
Ah, that's a good idea! Currently, in the TypeScript SDK (and perhaps other SDKs too), readResource accepts the params for the resources/read JSON-RPC call. But, if it could accept a resource object returned by resources/list, including the supportsDirectRead property, then it could transparently switch to reading directly when possible.
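As a sketch of what that transparent switch could look like client-side (all names hypothetical; `fetchDirect` stands in for a plain HTTP fetch of the URI, and `readViaServer` for the normal `resources/read` round trip):

```typescript
// Hypothetical resource reference, e.g. as returned by resources/list,
// carrying the proposed supportsDirectRead property.
interface ResourceRef {
  uri: string;
  supportsDirectRead?: boolean;
}

type ReadFn = (uri: string) => Promise<string>;

// Sketch of an SDK readResource that accepts the resource object itself.
// The caller never sees which path was taken.
async function readResource(
  ref: ResourceRef,
  fetchDirect: ReadFn,   // e.g. fetch(ref.uri) under the hood
  readViaServer: ReadFn, // e.g. a resources/read JSON-RPC call
): Promise<string> {
  if (ref.supportsDirectRead) {
    try {
      return await fetchDirect(ref.uri); // fast path: direct read
    } catch {
      // Fall through: direct read failed (expired URL, unsupported scheme, ...)
    }
  }
  return readViaServer(ref.uri); // always-available MCP path
}
```

Falling back to `resources/read` on a failed direct fetch also gives a natural answer to expiring URLs: the server path remains the source of truth.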
Yup - it crossed my mind because the Java SDK has an overload that takes a resource reference directly (as you note, useful for feeding from resources/list directly into resources/read), enabling it to be aware of those concepts if they were added to resources proper.
The caveat to this with private online resources is that we need to provide signed download URIs for direct download. Hard to do one-size-fits-all.
A direct download escape hatch on its own would be a starting point, but agreed, private resources aren't addressed by just adding that. Presigned URLs exist, but aren't suitable for every use case, and I would think they're nontrivial to implement (S3 and some S3-compatible stores like R2 support this but I'm unsure how broadly it exists beyond that).
I'd think defining if/how auth could be maintained across the MCP server connection and a direct download would be something that could be added on top of this without any issues, but we should get someone more familiar with auth to weigh in first, I think.
I opened a PR for my proposal: https://github.com/modelcontextprotocol/modelcontextprotocol/pull/607.
> The caveat to this with private online resources is that we need to provide signed download URIs for direct download. Hard to do one-size-fits-all.
What blockers are you anticipating? For example, is the problem that the URI can expire?
One possible solution, depending on the scenario, could be to use an HTTP redirect. The resource URI could point to a server that you control (perhaps the MCP server itself), and the server could redirect to the signed URI.
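A minimal sketch of that redirect idea, assuming a server you control answers a stable resource path with a 302 to a freshly signed URL (every name here is hypothetical, and real URL signing, e.g. an S3-style presigner, is stubbed out):

```typescript
// Stand-in for a real presigner (S3, R2, etc.); the signature is faked.
function signUrl(objectKey: string, expiresAt: number): string {
  return `https://cdn.example.com/${objectKey}?expires=${expiresAt}&sig=placeholder`;
}

// Framework-agnostic redirect handler: given the stable resource path and the
// current time, compute the status and Location header the server would send.
// The client keeps one stable URI; the signed URL behind it can rotate freely.
function redirectToSigned(
  path: string,
  now: number,
): { status: number; location: string } {
  const objectKey = path.replace(/^\/resources\//, "");
  // Sign with a short TTL (here 300 seconds) so leaked URLs expire quickly.
  return { status: 302, location: signUrl(objectKey, now + 300) };
}
```

Because the stable URI stays valid, expiry stops being the client's problem: each direct read just triggers a fresh signature at the redirect hop.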
Would you propose support for offering the resource in both formats and allowing the client to choose, or would it have to be one or the other?
My main concern is simply scaling this to the size and potential traffic of sites like GitHub in my case, so a redirect could potentially work. Handling huge scale at launch is a requirement for us, and I'm totally open to any solution that is reasonably aligned with the protocol and allows us to scale safely.
Our signed URLs for repository content do expire, it would certainly be ideal for us to let the host application consume them, and they won't be the same as the resource URI. That's all I was thinking.
Supporting full in-memory base64 file content would expose us to potential DDoS.
> Would you propose support for offering the resource in both formats and allowing the client to choose, or would it have to be one or the other?
I interpret the proposal as supporting both. Essentially something like this flow:
```mermaid
sequenceDiagram
    participant Application as Host Application
    participant Client as MCP SDK Client
    participant Server as MCP Server
    participant CDN
    Server->>Client: <resource reference from a tool call or something>
    Client->>Application: <ref>
    Application->>Client: readResource(ref)
    alt resource.supportsDirectRead is supported and true
        Client->>CDN: fetch
        CDN->>Client: <resource content>
    else resource.supportsDirectRead is not supported or false, or the resource URI is misunderstood, etc.
        Client->>Server: resources/read
        Server->>Client: <resource content>
    end
    Client->>Application: <resource object>
```
Linking #1597