buffer's worker closed
What is the issue?
After installing edge-25.11.1, the proxy does not start.
How can it be reproduced?
The problem manifests during a Helm-based installation. Troubleshooting steps attempted so far: regenerating the certificates, and deleting and recreating the service accounts. I have also verified the behavior across different security contexts: root vs. non-root users, and both privileged and unprivileged modes. The chart was installed with the values below (see the install sketch after the values).
values.yaml
identityTrustAnchorsPEM: |
  -----BEGIN CERTIFICATE-----
  *******************************
  -----END CERTIFICATE-----
identity:
  issuer:
    tls:
      crtPEM: |
        -----BEGIN CERTIFICATE-----
        *******************************
        -----END CERTIFICATE-----
      keyPEM: |
        -----BEGIN EC PRIVATE KEY-----
        *******************************
        -----END EC PRIVATE KEY-----
controllerReplicas: 3
enablePodDisruptionBudget: true
deploymentStrategy:
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 25%
enablePodAntiAffinity: true
proxy:
  resources:
    cpu:
      request: 100m
    memory:
      limit: 250Mi
      request: 20Mi
controllerResources: &controller_resources
  cpu: &controller_resources_cpu
    limit: ""
    request: 100m
  memory:
    limit: 250Mi
    request: 50Mi
destinationResources: *controller_resources
identityResources:
  cpu: *controller_resources_cpu
  memory:
    limit: 250Mi
    request: 10Mi
heartbeatResources: *controller_resources
proxyInjectorResources: *controller_resources
webhookFailurePolicy: Ignore
spValidatorResources: *controller_resources
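For reference, a minimal sketch of how the chart was applied, assuming the standard Linkerd Helm charts (linkerd-crds and linkerd-control-plane from the edge repo at https://helm.linkerd.io/edge) and that the values above are saved as values.yaml; the release names and namespace here are illustrative:

helm repo add linkerd-edge https://helm.linkerd.io/edge
helm repo update
helm install linkerd-crds linkerd-edge/linkerd-crds -n linkerd --create-namespace
helm install linkerd-control-plane linkerd-edge/linkerd-control-plane -n linkerd -f values.yaml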
Logs, error output, etc
pods
linkerd-destination-6dd4b59b7d-f6lsx ● 0/4 CrashLoopBackOff
linkerd-destination-6dd4b59b7d-vsjrl ● 0/4 CrashLoopBackOff
linkerd-destination-666d55cbd7-2cz65 ● 0/4 CrashLoopBackOff
linkerd-destination-666d55cbd7-gn8bn ● 0/4 CrashLoopBackOff
linkerd-identity-7b49f95fbb-cvlnn ● 2/2 Running
linkerd-identity-7b49f95fbb-l5nrv ● 2/2 Running
linkerd-identity-7b49f95fbb-pswwm ● 2/2 Running
linkerd-proxy-injector-fcd88ddbf-8v2tp ● 0/2 CrashLoopBackOff
linkerd-proxy-injector-fcd88ddbf-phbs2 ● 0/2 CrashLoopBackOff
linkerd-proxy-injector-fcd88ddbf-pjttf ● 0/2 CrashLoopBackOff
proxy
thread 'main' panicked at /__w/linkerd2-proxy/linkerd2-proxy/linkerd/proxy/balance/queue/src/service.rs:73:18:
worker must set a failure if it exits prematurely
stack backtrace:
0: 0x63ae9a41c7ef - <unknown>
1: 0x63ae9978bbe3 - <unknown>
2: 0x63ae9a41c07f - <unknown>
3: 0x63ae9a41c4f3 - <unknown>
4: 0x63ae9a41bb1e - <unknown>
5: 0x63ae9a450218 - <unknown>
6: 0x63ae9a450179 - <unknown>
7: 0x63ae9a450c8c - <unknown>
8: 0x63ae9978a60f - <unknown>
9: 0x63ae997926ba - <unknown>
10: 0x63ae9a06b87a - <unknown>
11: 0x63ae99f04de9 - <unknown>
12: 0x63ae99f01ca9 - <unknown>
13: 0x63ae99f0022c - <unknown>
14: 0x63ae99731141 - <unknown>
15: 0x63ae99c7d3ba - <unknown>
16: 0x63ae99c1da16 - <unknown>
17: 0x63ae996fef57 - <unknown>
18: 0x63ae99a0ee11 - <unknown>
19: 0x63ae999c3ec1 - <unknown>
20: 0x63ae99da4c63 - <unknown>
21: 0x63ae999c5443 - <unknown>
22: 0x7d3abdb2824a - <unknown>
23: 0x7d3abdb28305 - __libc_start_main
24: 0x63ae996aabe1 - <unknown>
25: 0x0 - <unknown>
[ 140.740274s] INFO ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.111.17.110:37300}: linkerd_app_core::errors::respond: gRPC request failed error=client 10.111.17.110:37300: server: 10.111.23.74:8080: server 10.111.23.74:8080: service linkerd-identity-headless.linkerd.svc.cluster.local:8080: buffered service failed: buffer's worker closed unexpectedly error.sources=[server 10.111.23.74:8080: service linkerd-identity-headless.linkerd.svc.cluster.local:8080: buffered service failed: buffer's worker closed unexpectedly, buffered service failed: buffer's worker closed unexpectedly, buffer's worker closed unexpectedly]
[ 140.740336s] WARN ThreadId(01) inbound:server{port=8080}:rescue{client.addr=10.111.17.110:37300}: linkerd_app_inbound::http::server: Unexpected error error=client 10.111.17.110:37300: server: 10.111.23.74:8080: server 10.111.23.74:8080: service linkerd-identity-headless.linkerd.svc.cluster.local:8080: buffered service failed: buffer's worker closed unexpectedly error.sources=[server 10.111.23.74:8080: service linkerd-identity-headless.linkerd.svc.cluster.local:8080: buffered service failed: buffer's worker closed unexpectedly, buffered service failed: buffer's worker closed unexpectedly, buffer's worker closed unexpectedly]
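For anyone reproducing this, equivalent proxy logs can be captured roughly as follows; the container name linkerd-proxy is the Linkerd default, while the choice of deployment here is just an example:

kubectl logs -n linkerd deploy/linkerd-destination -c linkerd-proxy --previous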
output of linkerd check -o short
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
DEBU[0001] Retrying on error: pod/linkerd-destination-666d55cbd7-gn8bn container sp-validator is not ready
DEBU[0006] Retrying on error: pod/linkerd-destination-6dd4b59b7d-f6lsx container sp-validator is not ready
DEBU[0014] Retrying on error: pod/linkerd-destination-6dd4b59b7d-f6lsx container sp-validator is not ready
Environment
Kubernetes: 1.31.12, Linkerd: edge-25.11.1, OS: Ubuntu 22.04
Possible solution
No response
Additional context
No response
Would you like to work on fixing this bug?
None
Can you provide more information about your setup? In what environment are you running Kubernetes? Can you point to the linkerd edge version at which you started encountering this problem?
K8s: v1.31.12, Linkerd: edge-25.11.1. The problem started after upgrading the Kubernetes cluster from 1.30 to 1.31; deploying to a clean 1.31 cluster did not reproduce it. With stable-2.14.10 it does start, but there are errors in the policy-controller: Readiness probe failed: HTTP probe failed with statuscode: 500. How can I get past this error?
{},"f:type":{}}}}]},"spec":{"ports":[{"name":"grpc","protocol":"TCP","port":5000,"targetPort":"grpc"},{"name":"http","protocol":"TCP","port":3000,"targetPort":"http"},{"name":"http
, Error("invalid type: null, expected a formatted date and time string or a unix timestamp", line: 1, column: 35950)
2025-11-20T16:03:35.852288Z INFO services: kubert::errors: stream failed error=failed to perform initial object list: Error deserializing response
2025-11-20T16:03:40.874053Z WARN services: kube_client::client: {"kind":"ServiceList","apiVersion":"v1","metadata":{"resourceVersion":"533369491"},"items":[{"metadata":{"name":"ar
, Error("invalid type: null, expected a formatted date and time string or a unix timestamp", line: 1, column: 35950)
2025-11-20T16:03:40.877639Z INFO services: kubert::errors: stream failed error=failed to perform initial object list: Error deserializing response
2025-11-20T16:03:45.899912Z WARN services: kube_client::client: {"kind":"ServiceList","apiVersion":"v1","metadata":{"resourceVersion":"533369613"},"items":[{"metadata":{"name":"ar
, Error("invalid type: null, expected a formatted date and time string or a unix timestamp", line: 1, column: 35950)
2025-11-20T16:03:45.902084Z INFO services: kubert::errors: stream failed error=failed to perform initial object list: Error deserializing response
I also tried deploying the stable-2.14.10 version on a clean 1.31 cluster, and no issues were found there either.
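The deserialization failure above ("invalid type: null, expected a formatted date and time string or a unix timestamp") suggests that some Service object in the cluster carries a null where the client expects a timestamp. As a rough way to narrow it down (an assumption, not a confirmed diagnosis: the offending field may be metadata.managedFields[].time or some other timestamp field), the following lists Services whose managedFields contain a null time:

kubectl get services -A -o json | jq -r '.items[] | select([.metadata.managedFields[]?.time] | any(. == null)) | "\(.metadata.namespace)/\(.metadata.name)"'

If nothing turns up, the raw ServiceList JSON can be inspected around the column reported in the error.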