Dangling SSE (Server-Sent Events) Streams
Environment
"h3": "^1.14.0"
Reproduction
- Have an endless SSE source (a minimal sketch of such a source is shown below)
- Proxy it with h3
- Open the proxied stream in the browser (2nd route)
- Close the browser window
- The proxied route still receives data -> not terminated
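Any endless SSE source will do for the first step. As a self-contained stand-in (my sketch; the original report used Python's FastAPI + sse_starlette on port 8000), a plain Node server could look like this:

import { createServer } from "node:http";

// Hypothetical stand-in for the upstream SSE backend on port 8000.
createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream; charset=utf-8",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  let n = 0;
  // Emit one event per second, forever.
  const timer = setInterval(() => {
    res.write(`data: tick ${n++}\n\n`);
  }, 1000);
  req.on("close", () => clearInterval(timer));
}).listen(8000);

The h3 proxy app from the report: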
import { createApp, createRouter, eventHandler, sendStream } from "h3";
import { $fetch } from "ofetch";

export const app = createApp();
const router = createRouter();
app.use(router);

router.get(
  "/sse/stream",
  eventHandler(async (event) => {
    // Replace with any (endless) SSE source; I used Python's FastAPI + sse_starlette.
    const response = await $fetch(
      "http://localhost:8000/rooms/9b3209ef-d675-4619-b510-57d5ed559915/queue/sse",
      {
        responseType: "stream",
        method: "GET",
        headers: {
          cookie: "participant_session=93bf571c-d1f5-470a-8bd7-f66d8e6ea9ac;",
        },
      },
    );

    event.node.res.setHeader(
      "Content-Type",
      "text/event-stream; charset=utf-8",
    );
    event.node.res.setHeader("Cache-Control", "no-cache");
    event.node.res.setHeader("Connection", "keep-alive");
    event.node.res.setHeader("X-Accel-Buffering", "no");

    return sendStream(event, response);
  }),
);

router.get(
  "/",
  eventHandler(async () => {
    return new Response(
      `<script>
        const es = new EventSource("/sse/stream");
        es.onmessage = (e) => {
          console.log(e.data);
        };
      </script>`,
      {
        headers: {
          "Content-Type": "text/html",
        },
      },
    );
  }),
);
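The snippet only exports app; assuming it is saved as app.ts, a minimal bootstrap to actually serve it (my addition, not part of the report) is:

import { createServer } from "node:http";
import { toNodeListener } from "h3";
import { app } from "./app";

// Serve the h3 app on port 3000; `npx listhen ./app.ts` should work as well.
createServer(toNodeListener(app)).listen(3000);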
Describe the bug
I initially attempted to proxy my Python backend through a Nuxt.js application, which uses Nitro (and thereby h3). However, I encountered an issue where Server-Sent Events (SSE) connections were not being properly closed when the client disconnected.
After investigating, I suspect the root cause lies in the sendStream implementation. It appears that sendStream waits for the stream to end naturally but doesn't account for cases where the stream is endless and the client disconnects. As a result, data continues to be sent even though the client is no longer connected - essentially sending it into the void and keeping dangling connections.
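For comparison, a userland workaround (my sketch, not something h3 provides) is to tie the upstream fetch's lifetime to the client connection, so the upstream request is aborted instead of dangling when the browser disconnects. Inside the /sse/stream handler above, the $fetch call would become roughly:

// Abort the upstream request when the client connection closes.
const upstreamAbort = new AbortController();
event.node.res.on("close", () => upstreamAbort.abort());

const response = await $fetch(
  // same upstream URL as in the reproduction above
  "http://localhost:8000/rooms/9b3209ef-d675-4619-b510-57d5ed559915/queue/sse",
  {
    responseType: "stream",
    method: "GET",
    signal: upstreamAbort.signal, // standard fetch option, passed through by ofetch
  },
);

This tears down the upstream connection on client disconnect, but it has to be repeated in every handler, which is why a fix inside sendStream itself seems preferable.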
Additional context
Dirty fix
The following modification to sendStream fixed the issue:
// Native Web Streams
if (
  hasProp(stream, "pipeTo") &&
  typeof (stream as ReadableStream).pipeTo === "function"
) {
  const abort_controller = new AbortController();
  const doAbort = () => {
    // Shared abort function for both close and error events
    console.log("[h3] Aborting stream due to request close or error.");
    abort_controller.abort("Client closed or lost connection.");
  };
  event.node.res.on("close", doAbort);
  event.node.res.on("error", doAbort);
  event.node.res.on("finish", doAbort);
  return (stream as ReadableStream)
    .pipeTo(
      new WritableStream({
        write(chunk) {
          event.node.res.write(chunk);
        },
      }),
      { signal: abort_controller.signal },
    )
    .then(() => {
      console.log("[h3] Stream finished.");
      event.node.res.end();
    });
}
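If this turns into a PR, one small refinement worth considering (my suggestion, not part of the fix above) would be to detach the listeners once the pipe settles, so handlers don't accumulate on the response. With writable standing for the WritableStream constructed above:

// Detach the shared abort handler once the pipe has settled.
const cleanup = () => {
  event.node.res.off("close", doAbort);
  event.node.res.off("error", doAbort);
  event.node.res.off("finish", doAbort);
};

return (stream as ReadableStream)
  .pipeTo(writable, { signal: abort_controller.signal })
  .then(() => {
    console.log("[h3] Stream finished.");
    event.node.res.end();
  })
  .finally(cleanup);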
Logs
Seems like a valid fix 🤔
Should I create a PR with it?
The same goes for when the SSE connection is cut off upstream, for example by turning off the backend that serves the SSE endpoint. The client still thinks the SSE connection is alive. The same applies to the proxyRequest example implementation.
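For reference, the proxyRequest variant that shows the same behavior looks roughly like this (my sketch of the mentioned example implementation, with the upstream URL shortened):

import { proxyRequest } from "h3";

router.get(
  "/sse/proxy",
  eventHandler((event) =>
    // Same upstream SSE endpoint as in the reproduction above.
    proxyRequest(event, "http://localhost:8000/rooms/.../queue/sse"),
  ),
);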