Kubernetes probe causes ECONNRESET
Running this in a Kubernetes environment with K8s probe(s) configured, such as
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 7801
  timeoutSeconds: 60
```
causes errors to be logged, seemingly upon each probe.
Example:
```
Mon Jun 02 2025 13:24:11 GMT+0000 [verbose] - [health.js] GET /health [200] - returning server health.
Mon Jun 02 2025 13:24:11 GMT+0000 [error] - [server] Client error: read ECONNRESET, destroying socket.
Error: read ECONNRESET
    at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
Mon Jun 02 2025 13:24:11 GMT+0000 [error] - [server] Socket error: read ECONNRESET
Error: read ECONNRESET
    at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
```
@mjiderhamn, this is what ChatGPT said ...
That ECONNRESET log is a known and benign issue in Node.js servers (especially Express-based ones) when Kubernetes probes run against them. Let’s unpack what’s actually happening 👇
🧠 What’s happening
The key part of the error is `read ECONNRESET` at `TCP.onStreamRead`. This means:
- The client (Kubernetes) opened a TCP connection to `/health`.
- It closed the connection abruptly after receiving the HTTP response (usually because it doesn’t care about reading the full response body).
- Node.js interprets that as a socket read reset, and Express logs it as an error.
So the container is perfectly fine and the probe succeeds (returns 200), but Node logs an error message because of how the connection is closed.
⚙️ Why this happens
Kubernetes’ HTTP probes use a very simple internal Go HTTP client.
When it gets a 200 response, it immediately closes the connection without waiting for Node to finish writing out the response body.
In Node (especially before v20), this can trigger a `read ECONNRESET`, because the server tries to write to a socket that has just been closed by the client.
The node-export-server doesn’t suppress this by default, so you see noisy “errors” for every probe.
✅ The good news
This is harmless. The probe still works, and your container is not restarting.
You can safely ignore it.
That said, if you don’t want these logs cluttering your output, you have a few options.
🧩 Mitigation Options
Option 1 – Ignore the log noise (most common)
Just leave it as is — it won’t affect liveness/readiness behavior.
Pros: Simple, nothing changes.
Cons: Logs look messy, especially if you have probes every few seconds.
Option 2 – Use TCP probe instead of HTTP
Since /health just confirms the process is up, a TCP probe can replace the HTTP one.
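For example, a minimal sketch of a TCP liveness probe (the port comes from the config above; the timing values are illustrative, not taken from the original setup):

```yaml
livenessProbe:
  tcpSocket:
    port: 7801        # only checks that the port accepts TCP connections
  periodSeconds: 10   # illustrative; tune to your cluster
  timeoutSeconds: 5
```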
✅ Advantages:
- No HTTP traffic at all → no ECONNRESET logs.
- Still confirms Node’s HTTP listener is alive.
⚠️ Trade-off:
Doesn’t verify that the HTTP handler itself responds with 200 — only that the port is open.
This is often good enough for liveness, while keeping readiness as an HTTP probe.
Option 3 – Keep HTTP for readiness, TCP for liveness
This hybrid is ideal for Node-based web apps:
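A sketch of what that could look like (port and path are taken from the original config; the timing values are illustrative assumptions):

```yaml
livenessProbe:
  tcpSocket:
    port: 7801          # "is the process listening?"
  periodSeconds: 10     # illustrative values throughout
readinessProbe:
  httpGet:
    path: /health       # "is the HTTP server responsive?"
    port: 7801
  periodSeconds: 10
  timeoutSeconds: 5
```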
✅ Effect:
- Liveness just checks “is the process listening?”
- Readiness checks “is the HTTP server responsive?”
- Far fewer ECONNRESET entries in the logs (only the HTTP readiness probe can still trigger them).
Option 4 – Modify the Node server (less ideal)
If you maintain your own fork or wrapper around node-export-server, you could catch and suppress the ECONNRESET like this in your Express app:
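A minimal, hedged sketch of the idea (this is not node-export-server’s actual code; it follows the standard `clientError` pattern from the Node.js docs):

```js
const express = require('express');

const app = express();
app.get('/health', (req, res) => res.status(200).send('OK'));

const server = app.listen(7801);

// 'clientError' fires on the underlying http.Server when a client resets the
// socket mid-request; the early-closing probe shows up here as ECONNRESET.
server.on('clientError', (err, socket) => {
  if (err.code === 'ECONNRESET' || !socket.writable) {
    socket.destroy(); // silently drop the dead socket instead of logging an error
    return;
  }
  // Anything else gets the usual 400 response, matching Node's default behaviour.
  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});
```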
…but that’s not always practical since you’re using a prebuilt image.
🔎 TL;DR Summary
| Problem | Cause | Impact | Fix |
|---|---|---|---|
| read ECONNRESET in logs | K8s probe closes socket early | Harmless | Ignore or use TCP liveness probe |
| Affects readiness? | No | Probe still succeeds | — |
| Affects stability? | No | Just noisy logs | — |
| Best practice | — | — | TCP for liveness, HTTP for readiness (✅ recommended) |
✅ Conclusion:
The ECONNRESET errors are just noise caused by how K8s probes close connections.
Your pod is fine.
Use TCP liveness + HTTP readiness if you want clean logs and reliable health monitoring.