Talos Cluster unstable

Open UmanGarbag opened this issue 1 month ago • 11 comments

Bug Report

Hello guys, I have set up a Talos cluster in my Proxmox environment.

It contains 3 nodes (1 control plane and 2 workers).

Talos Version : 1.10.5

Description

The cluster is set up for homelab purposes, but pods keep restarting for unknown reasons.

I've been trying to debug this by investigating on the nodes, but I didn't find error logs that were really helpful.

I already had this issue on a cluster with Talos version 1.9.5; upgrading to 1.10.5 didn't fix it.

You can see that the pod restart counts are high. Image
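
For reference, restart counts across all namespaces can be listed with something like this (it sorts by the first container's restart count, which is enough for single-container pods):

# highest restart counts appear at the bottom of the sorted output
kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'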

Logs

pod kube-apiserver

E1123 13:56:00.147794       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:00.149237       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.150382       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.151521       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.152789       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.899284ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pks-share-wk2" result=null
E1123 13:56:00.173687       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:00.174993       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.176219       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.177357       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.178594       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.839672ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pks-share-cp" result=null
E1123 13:56:00.271539       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:00.273006       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.274116       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.275257       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.276537       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.925904ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pks-share-wk1" result=null
E1123 13:56:00.739054       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:00.740219       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.741336       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.742430       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:00.743642       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.552803ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/flux-system/leases/source-controller-leader-election" result=null
E1123 13:56:04.371033       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.89µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:04.371445       1 controller.go:195] "Failed to update lease" err="Timeout: request did not complete within requested timeout - context deadline exceeded"
E1123 13:56:04.732903       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:04.734103       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:04.735222       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:04.736350       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:04.737610       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.616114ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/external-secrets/leases/external-secrets-controller" result=null
E1123 13:56:05.467665       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:05.473234       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:09.024306       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 9.371µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:09.281106       1 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-vlq2olu7fdhe7ulslvmeo2jdyi\": the object has been modified; please apply your changes to the latest version and try again"
I1123 13:56:09.979262       1 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
E1123 13:56:22.063489       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:22.064891       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:22.066001       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:22.067322       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:22.068583       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.918595ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:22.988211       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:22.989412       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:22.990536       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:22.991631       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:22.992857       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.573313ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:23.341723       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:23.343398       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:23.344514       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:23.345603       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:23.346804       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.054818ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result=null
E1123 13:56:23.925064       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:23.925087       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.961µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:23.926202       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:23.927301       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:23.928483       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.5103ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/external-secrets/leases/external-secrets-controller" result=null
E1123 13:56:24.339710       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:25.182968       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:25.184130       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:25.185248       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:25.186352       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:25.187570       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.559133ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result=null
E1123 13:56:25.681498       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:27.191878       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:27.357723       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:27.380477       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:27.528384       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:27.529931       1 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
E1123 13:56:28.606070       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:28.925359       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:28.926560       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.927689       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.928785       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.930099       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.695637ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/external-secrets/leases/external-secrets-controller" result=null
E1123 13:56:28.930729       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E1123 13:56:28.930883       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.930906       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 4.19µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:28.932024       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.932030       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.933141       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.933177       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.934268       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:28.934403       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.613424ms" method="PATCH" path="/api/v1/namespaces/flux-system/events/flux-system.187aa751e456d9a4" result=null
E1123 13:56:28.935469       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.794321ms" method="GET" path="/api/v1/namespaces/flux-system/secrets/flux-system" result=null
E1123 13:56:30.756052       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E1123 13:56:30.757252       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:30.758358       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:30.759467       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:30.760783       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.717918ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:30.764862       1 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-vlq2olu7fdhe7ulslvmeo2jdyi\": the object has been modified; please apply your changes to the latest version and try again"
E1123 13:56:41.629799       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:41.629858       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.72µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:41.630018       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.32µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:41.630092       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:41.631136       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:41.631161       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:41.632243       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:41.632265       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:41.633453       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.542471ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:41.633540       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.80383ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:42.128489       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.12µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:42.129051       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:42.129468       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:42.130115       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:42.130721       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:42.131217       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:42.132317       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:42.132386       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.988585ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result=null
E1123 13:56:42.133429       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:42.134562       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.024007ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result=null
E1123 13:56:43.629826       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:47.128247       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E1123 13:56:47.129428       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.129659       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E1123 13:56:47.130589       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.130957       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.132508       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.132537       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.133921       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E1123 13:56:47.134197       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.134276       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.0878ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result=null
E1123 13:56:47.135362       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.136432       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.136541       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.803753ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result=null
E1123 13:56:47.137552       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:47.138687       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.795651ms" method="GET" path="/api/v1/namespaces/kube-system/serviceaccounts/validatingadmissionpolicy-status-controller" result=null
E1123 13:56:49.704959       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:49.706195       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:49.707321       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:49.708429       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:49.709664       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.595994ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:50.818282       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:50.819449       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:50.820565       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:50.821681       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:50.822898       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.574323ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:52.130837       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 4.32µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 13:56:52.130842       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:52.131943       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:52.133039       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:52.134243       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.451948ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result=null
E1123 13:56:58.064516       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:58.065712       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:58.066829       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:58.067942       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:58.069128       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.568523ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:59.098367       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:56:59.099554       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:59.100671       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:56:59.101782       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:56:59.102998       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.562962ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:56:59.246776       1 controller.go:163] "Unhandled Error" err="unable to sync kubernetes service: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:59.248855       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:59.250236       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:56:59.250278       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:57:25.183449       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E1123 13:57:25.184629       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:57:25.185748       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:57:25.186863       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:57:25.188129       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.705937ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:57:27.933245       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:57:27.987961       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:57:27.989487       1 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
E1123 13:57:28.232195       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:57:28.336933       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:57:28.336946       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
E1123 13:57:28.369288       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:57:28.369295       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.43µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 13:57:28.370398       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:57:28.371436       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:57:28.373704       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.646745ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null
E1123 13:57:28.709832       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
E1123 13:57:28.711491       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:57:28.712602       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 13:57:28.713705       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 13:57:28.714928       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.020027ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/external-secrets/leases/external-secrets-controller" result=null
E1123 13:57:30.639667       1 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-vlq2olu7fdhe7ulslvmeo2jdyi\": the object has been modified; please apply your changes to the latest version and try again"
E1123 14:09:37.502924       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.502925       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 5.94µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 14:09:37.504056       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.505735       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.507002       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.104488ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result=null
E1123 14:09:37.507039       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.507053       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 3.68µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 14:09:37.507137       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.507153       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.25µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 14:09:37.507947       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.508015       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.71µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>" logger="UnhandledError"
E1123 14:09:37.508160       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.508179       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.509125       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.509308       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.509464       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.510476       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.392516ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/flux-system/leases/flux-operator" result=null
E1123 14:09:37.510557       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.600663ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/external-secrets/leases/external-secrets-controller" result=null
E1123 14:09:37.510961       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E1123 14:09:37.512108       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.140318ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock" result=null

pod kube-controller-manager-pks-share-cp

I1123 14:02:15.333933       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1123 14:02:15.333977       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1123 14:02:15.333979       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pks-share-wk2"
I1123 14:02:15.333982       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I1123 14:02:15.333999       1 shared_informer.go:320] Caches are synced for cidrallocator
I1123 14:02:15.334042       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pks-share-wk1"
I1123 14:02:15.334051       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pks-share-cp"
I1123 14:02:15.334073       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk2"
I1123 14:02:15.334084       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-cp"
I1123 14:02:15.334079       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1123 14:02:15.334101       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk1"
I1123 14:02:15.335249       1 shared_informer.go:320] Caches are synced for resource quota
I1123 14:02:15.337370       1 shared_informer.go:320] Caches are synced for expand
I1123 14:02:15.338567       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I1123 14:02:15.339807       1 shared_informer.go:320] Caches are synced for job
I1123 14:02:15.342037       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
I1123 14:02:15.342070       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
I1123 14:02:15.343140       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1123 14:02:15.343161       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
I1123 14:02:15.345320       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
I1123 14:02:15.351511       1 shared_informer.go:320] Caches are synced for HPA
I1123 14:02:15.352643       1 shared_informer.go:320] Caches are synced for resource quota
I1123 14:02:15.353677       1 shared_informer.go:320] Caches are synced for disruption
I1123 14:02:15.355858       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
I1123 14:02:15.356986       1 shared_informer.go:320] Caches are synced for endpoint
I1123 14:02:15.363496       1 shared_informer.go:320] Caches are synced for garbage collector
I1123 14:04:42.011668       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-cp"
I1123 14:05:09.176375       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk2"
I1123 14:05:11.890954       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk1"
I1123 14:09:49.409012       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-cp"
I1123 14:10:15.140638       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk2"
I1123 14:10:19.720270       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk1"
I1123 14:14:55.525373       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-cp"
I1123 14:15:20.976962       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk2"
I1123 14:15:25.453922       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk1"
I1123 14:20:00.867847       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-cp"
I1123 14:20:27.508014       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk2"
I1123 14:20:31.180503       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk1"
I1123 14:25:06.513713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-cp"
I1123 14:25:34.528075       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk2"
I1123 14:25:37.234529       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pks-share-wk1

pod kube-scheduler-pks-share-cp

I1123 14:02:00.100723       1 serving.go:386] Generated self-signed cert in-memory
I1123 14:02:00.553489       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
I1123 14:02:00.553506       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1123 14:02:00.556904       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1123 14:02:00.556926       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1123 14:02:00.557021       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I1123 14:02:00.557215       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I1123 14:02:00.557435       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I1123 14:02:00.557621       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1123 14:02:00.557437       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1123 14:02:00.557714       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1123 14:02:00.657141       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1123 14:02:00.657986       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1123 14:02:00.657985       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I1123 14:02:00.658110       1 leaderelection.go:257] attempting to acquire leader lease kube-system/kube-scheduler...
I1123 14:02:16.682855       1 leaderelection.go:271] successfully acquired lease kube-system/kube-scheduler
E1123 14:09:37.503221       1 leaderelection.go:429] Failed to update lock optimistically: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io kube-scheduler), falling back to slow path

Events

kube-controller-manager-pks-share-cp

Warning  Failed     40m (x2 over 15h)    kubelet  Error: context deadline exceeded
Warning  Failed     39m (x3 over 40m)    kubelet  Error: failed to reserve container name "kube-controller-manager_kube-controller-manager-pks-share-cp_kube-system_44a3668a5a412b2e88dbfa21382f571c_44": name "kube-controller-manager_kube-controller-manager-pks-share-cp_kube-system_44a3668a5a412b2e88dbfa21382f571c_44" is reserved for "79e87b29033916f3bf4726a6bfe01fb2e6a825ba688c58ae11ae710dfe253cd7"
Warning  Unhealthy  35m (x32 over 4d2h)  kubelet  Liveness probe failed: Get "https://localhost:10257/healthz": dial tcp [::1]:10257: connect: connection refused
Warning  BackOff    30m (x52 over 4d6h)  kubelet  Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-pks-share-cp_kube-system(44a3668a5a412b2e88dbfa21382f571c)
Normal   Pulled     30m (x50 over 4d6h)  kubelet  Container image "registry.k8s.io/kube-controller-manager:v1.32.0" already present on machine
Normal   Created    30m (x45 over 4d6h)  kubelet  Created container: kube-controller-manager
Normal   Started    30m (x45 over 4d6h)  kubelet  Started container kube-controller-manager

Environment

  • Talos version: 1.10.5
  • Kubernetes version: v1.32.0
  • Platform: Proxmox 8.2.4

Contact

If someone wants to help me troubleshoot this issue, I can provide more information here or on Discord.

UmanGarbag, Nov 23 '25

Most probably you don't have enough resources allocated to the control plane nodes: etcd is having issues, which takes down the apiserver and everything else down the chain.

This is not an issue with Talos Linux.
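
A quick way to confirm whether the control plane node is actually resource-starved (a minimal sketch; <control-plane-ip> is a placeholder, and kubectl top assumes metrics-server is deployed, which it is not by default):

# Live CPU / memory / process view straight from Talos
talosctl -n <control-plane-ip> dashboard

# Per-node and per-pod usage from Kubernetes (requires metrics-server)
kubectl top nodes
kubectl top pods -n kube-system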

smira avatar Nov 26 '25 11:11 smira

Hello,

My system resources are:

Control plane: 4 CPU, 3 GB RAM

Worker nodes: 2 CPU, 2 GB RAM

Is that not enough?

UmanGarbag avatar Nov 26 '25 11:11 UmanGarbag

Whether it's enough or not depends on your workload, and also these are VMs, so you can easily overcommit on memory.

smira avatar Nov 26 '25 12:11 smira

OK, I increased the RAM by 2 GB for the control plane and worker nodes. We will see if it's enough.

At the moment I only have External Secrets, Flux CD, and Cilium running.

UmanGarbag avatar Nov 26 '25 12:11 UmanGarbag

Hello,

After increasing the RAM and CPU for the control plane and worker nodes:

  • Control plane with 8 GB
  • Worker nodes with 6 GB

I have fewer restarts, but some pods are still restarting:

Image

For example, the logs from a cilium-operator pod:

Image

Does anyone have advice on how to troubleshoot this? I can't use a cluster that is this unstable.

Thanks!

UmanGarbag avatar Dec 04 '25 19:12 UmanGarbag

What are the etcd logs? This problem looks similar to other issues I've seen when a disk is too slow or has errors and etcd cannot write to the disk fast enough.

Are you running from USB drives or SD cards? I've seen this problem happen most commonly in that situation.
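
One way to measure that, borrowed from etcd's own disk tuning guidance, is fio's fdatasync test against the same storage that backs the Talos VM disks. Talos has no shell, so the sketch below assumes a throwaway Linux VM placed on the same Proxmox/TrueNAS datastore; the directory name is arbitrary:

# Simulates etcd's WAL write pattern: small sequential writes with fdatasync after each
mkdir test-data
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=etcd-disk-check

The 99th-percentile fdatasync latency fio reports should stay below roughly 10 ms for etcd to be happy; values much higher than that tend to show up as "apply request took too long" warnings and failed etcd health checks.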

rothgar avatar Dec 05 '25 01:12 rothgar

Hello,

I will provide the etcd logs later.

These are actually virtual machines running in Proxmox. The virtual machine data is stored on TrueNAS with SSD disks.

I checked the virtual machine disks and they are not full.

For example, when running a Kubernetes cluster installed with Kubespray, I didn't have this problem.

UmanGarbag avatar Dec 06 '25 14:12 UmanGarbag

Hello,

You can find the logs of the etcd service below, obtained with this command:

talosctl -e 10.13.1.90 -n 10.13.1.90 logs etcd -f

10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.247979Z","caller":"traceutil/trace.go:171","msg":"trace[255747227] 
transaction","detail":"{read_only:false; response_revision:22967127; number_of_response:1; }","duration":"259.001073ms","start":"2025-12-07T17:34:20.988965Z","end":"2025-12-07T17:34:21.247966Z","steps":["trace[255747227] 'process raft request'  (duration: 258.90112ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:21.248032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.046389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:9"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.248051Z","caller":"traceutil/trace.go:171","msg":"trace[1399129985] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:22967127; }","duration":"196.091791ms","start":"2025-12-07T17:34:21.051954Z","end":"2025-12-07T17:34:21.248046Z","steps":["trace[1399129985] 'agreement among raft nodes before linearized reading'  (duration: 196.04302ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.558030Z","caller":"traceutil/trace.go:171","msg":"trace[293967807] linearizableReadLoop","detail":"{readStateIndex:24270068; appliedIndex:24270067; }","duration":"243.684263ms","start":"2025-12-07T17:34:21.314330Z","end":"2025-12-07T17:34:21.558014Z","steps":["trace[293967807] 'read index received'  (duration: 234.477659ms)","trace[293967807] 'applied index is now lower than readState.Index'  (duration: 9.205434ms)"],"step_count":2}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:21.558139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.799426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/kustomize.toolkit.fluxcd.io/kustomizations/\" range_end:\"/registry/kustomize.toolkit.fluxcd.io/kustomizations0\" count_only:true ","response":"range_response_count:0 size:9"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.558162Z","caller":"traceutil/trace.go:171","msg":"trace[465374173] range","detail":"{range_begin:/registry/kustomize.toolkit.fluxcd.io/kustomizations/; range_end:/registry/kustomize.toolkit.fluxcd.io/kustomizations0; response_count:0; response_revision:22967128; }","duration":"243.849258ms","start":"2025-12-07T17:34:21.314305Z","end":"2025-12-07T17:34:21.558155Z","steps":["trace[465374173] 'agreement among raft nodes before linearized reading'  (duration: 243.768546ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.558196Z","caller":"traceutil/trace.go:171","msg":"trace[1138400166] transaction","detail":"{read_only:false; response_revision:22967128; number_of_response:1; }","duration":"340.641791ms","start":"2025-12-07T17:34:21.217546Z","end":"2025-12-07T17:34:21.558188Z","steps":["trace[1138400166] 'process raft request'  (duration: 331.271263ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:21.558253Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:21.217533Z","time spent":"340.684583ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-vlq2olu7fdhe7ulslvmeo2jdyi\" mod_revision:22967089 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-vlq2olu7fdhe7ulslvmeo2jdyi\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-vlq2olu7fdhe7ulslvmeo2jdyi\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.559401Z","caller":"traceutil/trace.go:171","msg":"trace[2116632574] transaction","detail":"{read_only:false; response_revision:22967129; number_of_response:1; }","duration":"136.577805ms","start":"2025-12-07T17:34:21.422814Z","end":"2025-12-07T17:34:21.559392Z","steps":["trace[2116632574] 'process raft request'  (duration: 136.515463ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.559484Z","caller":"traceutil/trace.go:171","msg":"trace[169763537] transaction","detail":"{read_only:false; response_revision:22967130; number_of_response:1; }","duration":"121.641875ms","start":"2025-12-07T17:34:21.437835Z","end":"2025-12-07T17:34:21.559477Z","steps":["trace[169763537] 'process raft request'  (duration: 121.539422ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:21.559580Z","caller":"traceutil/trace.go:171","msg":"trace[476051566] transaction","detail":"{read_only:false; response_revision:22967131; number_of_response:1; }","duration":"120.381916ms","start":"2025-12-07T17:34:21.439193Z","end":"2025-12-07T17:34:21.559575Z","steps":["trace[476051566] 'process raft request'  (duration: 120.262533ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.086472Z","caller":"traceutil/trace.go:171","msg":"trace[1151005879] transaction","detail":"{read_only:false; response_revision:22967231; number_of_response:1; }","duration":"731.014643ms","start":"2025-12-07T17:34:49.355446Z","end":"2025-12-07T17:34:50.086460Z","steps":["trace[1151005879] 'process raft request'  (duration: 730.948421ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.086552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.355432Z","time spent":"731.081595ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":528,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/external-secrets/external-secrets-controller\" mod_revision:22967224 > success:<request_put:<key:\"/registry/leases/external-secrets/external-secrets-controller\" value_size:459 >> failure:<request_range:<key:\"/registry/leases/external-secrets/external-secrets-controller\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217130Z","caller":"traceutil/trace.go:171","msg":"trace[1862173767] transaction","detail":"{read_only:false; response_revision:22967233; number_of_response:1; }","duration":"566.429138ms","start":"2025-12-07T17:34:49.650689Z","end":"2025-12-07T17:34:50.217118Z","steps":["trace[1862173767] 'process raft request'  (duration: 566.388356ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217204Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.650679Z","time spent":"566.490739ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/cilium-operator-resource-lock\" mod_revision:22967227 > success:<request_put:<key:\"/registry/leases/kube-system/cilium-operator-resource-lock\" value_size:416 >> failure:<request_range:<key:\"/registry/leases/kube-system/cilium-operator-resource-lock\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217295Z","caller":"traceutil/trace.go:171","msg":"trace[1805880654] transaction","detail":"{read_only:false; response_revision:22967232; number_of_response:1; }","duration":"570.019018ms","start":"2025-12-07T17:34:49.647263Z","end":"2025-12-07T17:34:50.217282Z","steps":["trace[1805880654] 'process raft request'  (duration: 500.599616ms)","trace[1805880654] 'compare'  (duration: 69.154554ms)"],"step_count":2}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217336Z","caller":"traceutil/trace.go:171","msg":"trace[1977213835] transaction","detail":"{read_only:false; response_revision:22967239; number_of_response:1; }","duration":"126.173584ms","start":"2025-12-07T17:34:50.091153Z","end":"2025-12-07T17:34:50.217327Z","steps":["trace[1977213835] 'process raft request'  (duration: 126.161144ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217354Z","caller":"traceutil/trace.go:171","msg":"trace[117413479] transaction","detail":"{read_only:false; response_revision:22967236; number_of_response:1; }","duration":"447.851705ms","start":"2025-12-07T17:34:49.769496Z","end":"2025-12-07T17:34:50.217347Z","steps":["trace[117413479] 'process raft request'  (duration: 447.774832ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217359Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.647249Z","time spent":"570.07799ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":496,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:22967226 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:436 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-controller-manager\" > >"}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217387Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.769491Z","time spent":"447.881376ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":523,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/helm-controller-leader-election\" mod_revision:22967216 > success:<request_put:<key:\"/registry/leases/flux-system/helm-controller-leader-election\" value_size:455 >> failure:<request_range:<key:\"/registry/leases/flux-system/helm-controller-leader-election\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217420Z","caller":"traceutil/trace.go:171","msg":"trace[1086450144] transaction","detail":"{read_only:false; response_revision:22967238; number_of_response:1; }","duration":"445.617706ms","start":"2025-12-07T17:34:49.771798Z","end":"2025-12-07T17:34:50.217416Z","steps":["trace[1086450144] 'process raft request'  (duration: 445.501392ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217424Z","caller":"traceutil/trace.go:171","msg":"trace[362774970] linearizableReadLoop","detail":"{readStateIndex:24270179; appliedIndex:24270176; }","duration":"528.607396ms","start":"2025-12-07T17:34:49.688812Z","end":"2025-12-07T17:34:50.217419Z","steps":["trace[362774970] 'read index received'  (duration: 397.648744ms)","trace[362774970] 'applied index is now lower than readState.Index'  (duration: 130.958272ms)"],"step_count":2}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217441Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.771792Z","time spent":"445.638786ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":555,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/notification-controller-leader-election\" mod_revision:22967217 > success:<request_put:<key:\"/registry/leases/flux-system/notification-controller-leader-election\" value_size:479 >> failure:<request_range:<key:\"/registry/leases/flux-system/notification-controller-leader-election\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217449Z","caller":"traceutil/trace.go:171","msg":"trace[1316091556] transaction","detail":"{read_only:false; response_revision:22967234; number_of_response:1; }","duration":"566.754818ms","start":"2025-12-07T17:34:49.650689Z","end":"2025-12-07T17:34:50.217444Z","steps":["trace[1316091556] 'process raft request'  (duration: 566.412597ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217470Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.650679Z","time spent":"566.780428ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/flux-operator\" mod_revision:22967228 > success:<request_put:<key:\"/registry/leases/flux-system/flux-operator\" value_size:434 >> failure:<request_range:<key:\"/registry/leases/flux-system/flux-operator\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217476Z","caller":"traceutil/trace.go:171","msg":"trace[742552599] transaction","detail":"{read_only:false; response_revision:22967237; number_of_response:1; }","duration":"447.921818ms","start":"2025-12-07T17:34:49.769550Z","end":"2025-12-07T17:34:50.217472Z","steps":["trace[742552599] 'process raft request'  (duration: 447.733172ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.769539Z","time spent":"447.947798ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":542,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/kustomize-controller-leader-election\" mod_revision:22967215 > success:<request_put:<key:\"/registry/leases/flux-system/kustomize-controller-leader-election\" value_size:469 >> failure:<request_range:<key:\"/registry/leases/flux-system/kustomize-controller-leader-election\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217523Z","caller":"traceutil/trace.go:171","msg":"trace[1425779940] transaction","detail":"{read_only:false; response_revision:22967235; number_of_response:1; }","duration":"449.467105ms","start":"2025-12-07T17:34:49.768051Z","end":"2025-12-07T17:34:50.217519Z","steps":["trace[1425779940] 'process raft request'  (duration: 449.174826ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"528.706929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/cilium-operator-resource-lock\" limit:1 ","response":"range_response_count:1 size:504"}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.768043Z","time spent":"449.489405ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":531,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/source-controller-leader-election\" mod_revision:22967214 > success:<request_put:<key:\"/registry/leases/flux-system/source-controller-leader-election\" value_size:461 >> failure:<request_range:<key:\"/registry/leases/flux-system/source-controller-leader-election\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217548Z","caller":"traceutil/trace.go:171","msg":"trace[1352203982] range","detail":"{range_begin:/registry/leases/kube-system/cilium-operator-resource-lock; range_end:; response_count:1; response_revision:22967239; }","duration":"528.749551ms","start":"2025-12-07T17:34:49.688792Z","end":"2025-12-07T17:34:50.217541Z","steps":["trace[1352203982] 'agreement among raft nodes before linearized reading'  (duration: 528.698399ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217567Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:34:49.688786Z","time spent":"528.774891ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":528,"request content":"key:\"/registry/leases/kube-system/cilium-operator-resource-lock\" limit:1 "}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:34:50.217618Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.195678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"health\" ","response":"range_response_count:0 size:7"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:50.217629Z","caller":"traceutil/trace.go:171","msg":"trace[1148299908] range","detail":"{range_begin:health; range_end:; response_count:0; response_revision:22967239; }","duration":"115.223658ms","start":"2025-12-07T17:34:50.102403Z","end":"2025-12-07T17:34:50.217626Z","steps":["trace[1148299908] 'agreement among raft nodes before linearized reading'  (duration: 115.203608ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:52.357595Z","caller":"etcdserver/corrupt.go:276","msg":"starting compact hash check","local-member-id":"db3db3b6269f352b","timeout":"7s"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:34:52.357645Z","caller":"etcdserver/corrupt.go:335","msg":"finished compaction hash check","number-of-hashes-checked":10}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:01.606107Z","caller":"traceutil/trace.go:171","msg":"trace[821873647] linearizableReadLoop","detail":"{readStateIndex:24270227; appliedIndex:24270226; }","duration":"374.160924ms","start":"2025-12-07T17:35:01.231935Z","end":"2025-12-07T17:35:01.606096Z","steps":["trace[821873647] 'read index received'  (duration: 374.062221ms)","trace[821873647] 'applied index is now lower than readState.Index'  (duration: 98.373µs)"],"step_count":2}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:01.606135Z","caller":"traceutil/trace.go:171","msg":"trace[448803197] transaction","detail":"{read_only:false; response_revision:22967279; number_of_response:1; }","duration":"530.667681ms","start":"2025-12-07T17:35:01.075451Z","end":"2025-12-07T17:35:01.606119Z","steps":["trace[448803197] 'process raft request'  (duration: 530.571908ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:35:01.606181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.239586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:01.606196Z","caller":"traceutil/trace.go:171","msg":"trace[1521313645] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:22967279; }","duration":"374.281727ms","start":"2025-12-07T17:35:01.231910Z","end":"2025-12-07T17:35:01.606192Z","steps":["trace[1521313645] 'agreement among raft nodes before linearized reading'  (duration: 374.245856ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:35:01.606207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:35:01.075437Z","time spent":"530.731422ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pks-share-wk2\" mod_revision:22967241 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pks-share-wk2\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pks-share-wk2\" > >"}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:35:01.606217Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:35:01.231902Z","time spent":"374.310649ms","remote":"[::1]:59330","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":31,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:35:01.606217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.43268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cilium.io/ciliumendpoints/\" range_end:\"/registry/cilium.io/ciliumendpoints0\" count_only:true ","response":"range_response_count:0 size:9"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:01.606237Z","caller":"traceutil/trace.go:171","msg":"trace[168356153] range","detail":"{range_begin:/registry/cilium.io/ciliumendpoints/; range_end:/registry/cilium.io/ciliumendpoints0; response_count:0; response_revision:22967279; }","duration":"108.473422ms","start":"2025-12-07T17:35:01.497758Z","end":"2025-12-07T17:35:01.606232Z","steps":["trace[168356153] 'agreement among raft nodes before linearized reading'  (duration: 108.430321ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:11.986841Z","caller":"traceutil/trace.go:171","msg":"trace[891622868] transaction","detail":"{read_only:false; response_revision:22967318; number_of_response:1; }","duration":"281.508227ms","start":"2025-12-07T17:35:11.705320Z","end":"2025-12-07T17:35:11.986828Z","steps":["trace[891622868] 'process raft request'  (duration: 281.444005ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:24.748860Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":22966231}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:24.778651Z","caller":"mvcc/kvstore_compaction.go:71","msg":"finished scheduled compaction","compact-revision":22966231,"took":"29.586958ms","hash":517312133,"current-db-size-bytes":25812992,"current-db-size":"26 MB","current-db-size-in-use-bytes":10346496,"current-db-size-in-use":"10 MB"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:24.778684Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":517312133,"revision":22966231,"compact-revision":22965088}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:49.732854Z","caller":"traceutil/trace.go:171","msg":"trace[1873774840] linearizableReadLoop","detail":"{readStateIndex:24270419; appliedIndex:24270418; }","duration":"305.898148ms","start":"2025-12-07T17:35:49.426943Z","end":"2025-12-07T17:35:49.732841Z","steps":["trace[1873774840] 'read index received'  (duration: 305.814185ms)","trace[1873774840] 'applied index is now lower than readState.Index'  (duration: 83.263µs)"],"step_count":2}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:49.732891Z","caller":"traceutil/trace.go:171","msg":"trace[2146430856] transaction","detail":"{read_only:false; response_revision:22967460; number_of_response:1; }","duration":"396.036777ms","start":"2025-12-07T17:35:49.336846Z","end":"2025-12-07T17:35:49.732883Z","steps":["trace[2146430856] 'process raft request'  (duration: 395.909913ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:35:49.732950Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.993911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/cilium-operator-resource-lock\" limit:1 ","response":"range_response_count:1 size:504"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:49.732970Z","caller":"traceutil/trace.go:171","msg":"trace[1206537851] range","detail":"{range_begin:/registry/leases/kube-system/cilium-operator-resource-lock; range_end:; response_count:1; response_revision:22967460; }","duration":"306.033093ms","start":"2025-12-07T17:35:49.426932Z","end":"2025-12-07T17:35:49.732965Z","steps":["trace[1206537851] 'agreement among raft nodes before linearized reading'  (duration: 305.978391ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:35:49.732995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:35:49.336838Z","time spent":"396.12616ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pks-share-wk1\" mod_revision:22967422 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pks-share-wk1\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pks-share-wk1\" > >"}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:35:49.732993Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:35:49.426927Z","time spent":"306.056983ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":528,"request content":"key:\"/registry/leases/kube-system/cilium-operator-resource-lock\" limit:1 "}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:52.358097Z","caller":"etcdserver/corrupt.go:276","msg":"starting compact hash check","local-member-id":"db3db3b6269f352b","timeout":"7s"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:35:52.358135Z","caller":"etcdserver/corrupt.go:335","msg":"finished compaction hash check","number-of-hashes-checked":10}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:36:46.046278Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"676.895324ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3831326220305464697 > lease_revoke:<id:352b9ae5ebf27d03>","response":"size:31"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:36:46.046691Z","caller":"traceutil/trace.go:171","msg":"trace[1987688586] transaction","detail":"{read_only:false; response_revision:22967675; number_of_response:1; }","duration":"703.824821ms","start":"2025-12-07T17:36:45.342670Z","end":"2025-12-07T17:36:46.046495Z","steps":["trace[1987688586] 'process raft request'  (duration: 703.687867ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:36:46.046771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:36:45.342632Z","time spent":"704.09358ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":523,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/helm-controller-leader-election\" mod_revision:22967653 > success:<request_put:<key:\"/registry/leases/flux-system/helm-controller-leader-election\" value_size:455 >> failure:<request_range:<key:\"/registry/leases/flux-system/helm-controller-leader-election\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:36:46.050470Z","caller":"traceutil/trace.go:171","msg":"trace[561451455] transaction","detail":"{read_only:false; response_revision:22967677; number_of_response:1; }","duration":"702.687556ms","start":"2025-12-07T17:36:45.347772Z","end":"2025-12-07T17:36:46.050460Z","steps":["trace[561451455] 'process raft request'  (duration: 702.659355ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:36:46.050501Z","caller":"traceutil/trace.go:171","msg":"trace[1546289052] transaction","detail":"{read_only:false; response_revision:22967676; number_of_response:1; }","duration":"706.859734ms","start":"2025-12-07T17:36:45.343634Z","end":"2025-12-07T17:36:46.050494Z","steps":["trace[1546289052] 'process raft request'  (duration: 702.854411ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:36:46.050535Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:36:45.343630Z","time spent":"706.887685ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":531,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/source-controller-leader-election\" mod_revision:22967654 > success:<request_put:<key:\"/registry/leases/flux-system/source-controller-leader-election\" value_size:461 >> failure:<request_range:<key:\"/registry/leases/flux-system/source-controller-leader-election\" > >"}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:36:46.050543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:36:45.347768Z","time spent":"702.742088ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":542,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/kustomize-controller-leader-election\" mod_revision:22967655 > success:<request_put:<key:\"/registry/leases/flux-system/kustomize-controller-leader-election\" value_size:469 >> failure:<request_range:<key:\"/registry/leases/flux-system/kustomize-controller-leader-election\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:36:46.050620Z","caller":"traceutil/trace.go:171","msg":"trace[115993044] transaction","detail":"{read_only:false; response_revision:22967678; number_of_response:1; }","duration":"697.338232ms","start":"2025-12-07T17:36:45.353275Z","end":"2025-12-07T17:36:46.050613Z","steps":["trace[115993044] 'process raft request'  (duration: 697.171987ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:36:46.050743Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:36:45.353268Z","time spent":"697.442385ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":555,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/flux-system/notification-controller-leader-election\" mod_revision:22967656 > success:<request_put:<key:\"/registry/leases/flux-system/notification-controller-leader-election\" value_size:479 >> failure:<request_range:<key:\"/registry/leases/flux-system/notification-controller-leader-election\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:36:52.358752Z","caller":"etcdserver/corrupt.go:276","msg":"starting compact hash check","local-member-id":"db3db3b6269f352b","timeout":"7s"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:36:52.358793Z","caller":"etcdserver/corrupt.go:335","msg":"finished compaction hash check","number-of-hashes-checked":10}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:37:00.797243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:37:00.293801Z","time spent":"503.439705ms","remote":"[::1]:59012","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:37:00.797308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"401.277665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/source.toolkit.fluxcd.io/buckets/\" range_end:\"/registry/source.toolkit.fluxcd.io/buckets0\" count_only:true ","response":"range_response_count:0 size:7"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.797256Z","caller":"traceutil/trace.go:171","msg":"trace[320359367] linearizableReadLoop","detail":"{readStateIndex:24270703; appliedIndex:24270703; }","duration":"401.222683ms","start":"2025-12-07T17:37:00.396020Z","end":"2025-12-07T17:37:00.797243Z","steps":["trace[320359367] 'read index received'  (duration: 401.217473ms)","trace[320359367] 'applied index is now lower than readState.Index'  (duration: 4.28µs)"],"step_count":2}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.797328Z","caller":"traceutil/trace.go:171","msg":"trace[891616972] range","detail":"{range_begin:/registry/source.toolkit.fluxcd.io/buckets/; range_end:/registry/source.toolkit.fluxcd.io/buckets0; response_count:0; response_revision:22967728; }","duration":"401.327767ms","start":"2025-12-07T17:37:00.395995Z","end":"2025-12-07T17:37:00.797323Z","steps":["trace[891616972] 'agreement among raft nodes before linearized reading'  (duration: 401.280385ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:37:00.797345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:37:00.395985Z","time spent":"401.356257ms","remote":"[::1]:33334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":92,"response count":0,"response size":31,"request content":"key:\"/registry/source.toolkit.fluxcd.io/buckets/\" range_end:\"/registry/source.toolkit.fluxcd.io/buckets0\" count_only:true "}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.806184Z","caller":"traceutil/trace.go:171","msg":"trace[641778311] transaction","detail":"{read_only:false; response_revision:22967729; number_of_response:1; }","duration":"326.077513ms","start":"2025-12-07T17:37:00.480100Z","end":"2025-12-07T17:37:00.806178Z","steps":["trace[641778311] 'process raft request'  (duration: 325.98825ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:37:00.806229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-07T17:37:00.480089Z","time spent":"326.114465ms","remote":"[::1]:59236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":528,"response count":0,"response size":44,"request content":"compare:<target:MOD key:\"/registry/leases/external-secrets/external-secrets-controller\" mod_revision:22967724 > success:<request_put:<key:\"/registry/leases/external-secrets/external-secrets-controller\" value_size:459 >> failure:<request_range:<key:\"/registry/leases/external-secrets/external-secrets-controller\" > >"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.806332Z","caller":"traceutil/trace.go:171","msg":"trace[1138413128] transaction","detail":"{read_only:false; response_revision:22967730; number_of_response:1; }","duration":"238.49677ms","start":"2025-12-07T17:37:00.567825Z","end":"2025-12-07T17:37:00.806322Z","steps":["trace[1138413128] 'process raft request'  (duration: 238.333835ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.806431Z","caller":"traceutil/trace.go:171","msg":"trace[1460210904] transaction","detail":"{read_only:false; response_revision:22967731; number_of_response:1; }","duration":"236.21579ms","start":"2025-12-07T17:37:00.570204Z","end":"2025-12-07T17:37:00.806420Z","steps":["trace[1460210904] 'process raft request'  (duration: 236.079856ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.806567Z","caller":"traceutil/trace.go:171","msg":"trace[1390277896] transaction","detail":"{read_only:false; response_revision:22967732; number_of_response:1; }","duration":"216.458603ms","start":"2025-12-07T17:37:00.590103Z","end":"2025-12-07T17:37:00.806562Z","steps":["trace[1390277896] 'process raft request'  (duration: 216.293718ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.807615Z","caller":"traceutil/trace.go:171","msg":"trace[586629875] transaction","detail":"{read_only:false; response_revision:22967733; number_of_response:1; }","duration":"200.809762ms","start":"2025-12-07T17:37:00.606798Z","end":"2025-12-07T17:37:00.807607Z","steps":["trace[586629875] 'process raft request'  (duration: 199.74579ms)"],"step_count":1}
10.13.1.90: {"level":"warn","ts":"2025-12-07T17:37:00.807666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.742239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/monitoring.coreos.com/servicemonitors/\" range_end:\"/registry/monitoring.coreos.com/servicemonitors0\" count_only:true ","response":"range_response_count:0 size:9"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:00.807690Z","caller":"traceutil/trace.go:171","msg":"trace[488122371] range","detail":"{range_begin:/registry/monitoring.coreos.com/servicemonitors/; range_end:/registry/monitoring.coreos.com/servicemonitors0; response_count:0; response_revision:22967733; }","duration":"104.785311ms","start":"2025-12-07T17:37:00.702899Z","end":"2025-12-07T17:37:00.807684Z","steps":["trace[488122371] 'agreement among raft nodes before linearized reading'  (duration: 104.346247ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:52.359802Z","caller":"etcdserver/corrupt.go:276","msg":"starting compact hash check","local-member-id":"db3db3b6269f352b","timeout":"7s"}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:37:52.359848Z","caller":"etcdserver/corrupt.go:335","msg":"finished compaction hash check","number-of-hashes-checked":10}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:03.207295Z","caller":"traceutil/trace.go:171","msg":"trace[995300746] transaction","detail":"{read_only:false; response_revision:22967968; number_of_response:1; }","duration":"213.387011ms","start":"2025-12-07T17:38:02.993895Z","end":"2025-12-07T17:38:03.207282Z","steps":["trace[995300746] 'process raft request'  (duration: 213.321999ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:03.210863Z","caller":"traceutil/trace.go:171","msg":"trace[1997126613] transaction","detail":"{read_only:false; response_revision:22967969; number_of_response:1; }","duration":"215.934979ms","start":"2025-12-07T17:38:02.994919Z","end":"2025-12-07T17:38:03.210854Z","steps":["trace[1997126613] 'process raft request'  (duration: 215.873847ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:03.210936Z","caller":"traceutil/trace.go:171","msg":"trace[43882856] transaction","detail":"{read_only:false; response_revision:22967970; number_of_response:1; }","duration":"215.554468ms","start":"2025-12-07T17:38:02.995376Z","end":"2025-12-07T17:38:03.210930Z","steps":["trace[43882856] 'process raft request'  (duration: 215.462225ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:03.211049Z","caller":"traceutil/trace.go:171","msg":"trace[448130624] transaction","detail":"{read_only:false; response_revision:22967972; number_of_response:1; }","duration":"208.826141ms","start":"2025-12-07T17:38:03.002209Z","end":"2025-12-07T17:38:03.211035Z","steps":["trace[448130624] 'process raft request'  (duration: 208.80313ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:03.211053Z","caller":"traceutil/trace.go:171","msg":"trace[529365099] transaction","detail":"{read_only:false; response_revision:22967971; number_of_response:1; }","duration":"215.555007ms","start":"2025-12-07T17:38:02.995493Z","end":"2025-12-07T17:38:03.211048Z","steps":["trace[529365099] 'process raft request'  (duration: 215.418953ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:13.398463Z","caller":"traceutil/trace.go:171","msg":"trace[837115760] transaction","detail":"{read_only:false; response_revision:22968006; number_of_response:1; }","duration":"166.793359ms","start":"2025-12-07T17:38:13.231660Z","end":"2025-12-07T17:38:13.398454Z","steps":["trace[837115760] 'process raft request'  (duration: 166.722896ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:13.403252Z","caller":"traceutil/trace.go:171","msg":"trace[946812238] transaction","detail":"{read_only:false; response_revision:22968007; number_of_response:1; }","duration":"166.757518ms","start":"2025-12-07T17:38:13.236486Z","end":"2025-12-07T17:38:13.403244Z","steps":["trace[946812238] 'process raft request'  (duration: 166.642354ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:13.403257Z","caller":"traceutil/trace.go:171","msg":"trace[1328866566] transaction","detail":"{read_only:false; response_revision:22968011; number_of_response:1; }","duration":"137.384024ms","start":"2025-12-07T17:38:13.265862Z","end":"2025-12-07T17:38:13.403246Z","steps":["trace[1328866566] 'process raft request'  (duration: 137.358563ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:13.403295Z","caller":"traceutil/trace.go:171","msg":"trace[1656577794] transaction","detail":"{read_only:false; response_revision:22968008; number_of_response:1; }","duration":"162.6041ms","start":"2025-12-07T17:38:13.240687Z","end":"2025-12-07T17:38:13.403292Z","steps":["trace[1656577794] 'process raft request'  (duration: 162.491367ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:13.403349Z","caller":"traceutil/trace.go:171","msg":"trace[1210983007] transaction","detail":"{read_only:false; response_revision:22968010; number_of_response:1; }","duration":"161.780795ms","start":"2025-12-07T17:38:13.241563Z","end":"2025-12-07T17:38:13.403344Z","steps":["trace[1210983007] 'process raft request'  (duration: 161.64169ms)"],"step_count":1}
10.13.1.90: {"level":"info","ts":"2025-12-07T17:38:13.403275Z","caller":"traceutil/trace.go:171","msg":"trace[1482787208] transaction","detail":"{read_only:false; response_revision:22968009; number_of_response:1; }","duration":"162.185287ms","start":"2025-12-07T17:38:13.241079Z","end":"2025-12-07T17:38:13.403265Z","steps":["trace[1482787208] 'process raft request'  (duration: 162.110375ms)"],"step_count":1}

Here is also the output of this command:

talosctl -e 10.13.1.90 -n 10.13.1.90 service etcd

NODE     10.13.1.90
ID       etcd
STATE    Running
HEALTH   OK
EVENTS   [Running]: Health check successful (6h41m7s ago)
         [Running]: Health check failed: context deadline exceeded (6h41m41s ago)
         [Running]: Health check successful (18h34m16s ago)
         [Running]: Health check failed: context deadline exceeded (18h34m21s ago)
         [Running]: Health check successful (18h36m35s ago)
         [Running]: Health check failed: context deadline exceeded (18h36m41s ago)
         [Running]: Health check successful (18h38m53s ago)
         [Running]: Health check failed: context deadline exceeded (18h39m1s ago)
         [Running]: Health check successful (18h40m45s ago)
         [Running]: Health check failed: context deadline exceeded (18h41m1s ago)
         [Running]: Health check successful (34h50m5s ago)
         [Running]: Health check failed: context deadline exceeded (34h50m21s ago)
         [Running]: Health check successful (36h57m31s ago)
         [Running]: Health check failed: context deadline exceeded (36h57m41s ago)
         [Running]: Health check successful (42h40m12s ago)
         [Running]: Health check failed: context deadline exceeded (42h40m21s ago)
         [Running]: Health check successful (42h40m42s ago)
         [Running]: Health check failed: context deadline exceeded (42h41m21s ago)
         [Running]: Health check successful (43h40m12s ago)
         [Running]: Health check failed: context deadline exceeded (43h40m21s ago)
         [Running]: Health check successful (60h25m16s ago)
         [Running]: Health check failed: context deadline exceeded (60h25m21s ago)
         [Running]: Health check successful (61h55m16s ago)
         [Running]: Health check failed: context deadline exceeded (61h55m21s ago)
         [Running]: Health check successful (87h59m3s ago)
         [Running]: Health check failed: context deadline exceeded (88h0m1s ago)
         [Running]: Health check successful (93h8m16s ago)
         [Running]: Started task etcd (PID 2449) for container etcd (93h8m20s ago)
         [Preparing]: Creating service runner (93h8m31s ago)
         [Preparing]: Running pre state (93h8m34s ago)
         [Waiting]: Waiting for service "cri" to be "up" (93h8m34s ago)
         [Waiting]: Waiting for volume "/var/lib" to be mounted, volume "ETCD" to be mounted, service "cri" to be "up", time sync, network, etcd spec (93h8m35s ago)
         [Starting]: Starting service (93h8m35s ago)

UmanGarbag avatar Dec 07 '25 17:12 UmanGarbag

This is your problem - it's either communication between etcd nodes, or disk I/O is too slow for etcd. It has nothing to do with Talos itself.
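
The pasted logs already point that way: nearly every slow entry is a tiny lease-renewal write taking 300-700 ms, which is storage latency rather than load. A rough way to quantify it from the same log stream (a sketch; it assumes jq is installed locally and strips the node prefix that talosctl prepends to each line):

talosctl -e 10.13.1.90 -n 10.13.1.90 logs etcd \
  | sed 's/^[^ ]* //' \
  | jq -r 'select(.msg == "apply request took too long") | .took' \
  | sort -n | tail

If the durations are consistently in the hundreds of milliseconds, the storage path (Proxmox to TrueNAS) is the place to look.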

smira avatar Dec 08 '25 08:12 smira

Okay, thanks for the answer!

The etcd pod and the control plane pods are on the same node, so I don't think it's the network.

Do you have any commands for troubleshooting this?

UmanGarbag avatar Dec 08 '25 11:12 UmanGarbag

Troubleshooting depends on your environment/platform and the debugging tools available in it.
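
For a Proxmox + TrueNAS setup specifically, a reasonable first check (a sketch; the storage ID below is a placeholder) is to measure fsync performance of the datastore from the Proxmox host itself and compare it against local storage:

# On the Proxmox host; pveperf reports FSYNCS/SECOND for the given path
pveperf /mnt/pve/<truenas-storage-id>

# Local storage for comparison
pveperf /var/lib/vz

If the TrueNAS-backed path shows dramatically fewer fsyncs per second than local storage, moving the control plane VM's disk (or at least its etcd volume) to a local SSD is worth trying.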

smira avatar Dec 08 '25 11:12 smira