
etcd-manager logs say that the new cilium etcd member has joined the rest of the cluster correctly, but that's not correct! etcd is down and there is no data in the volume attached to the EC2 instance

nuved opened this issue 4 months ago · 1 comment

/kind bug

1. What kops version are you running? The command kops version, will display this information.
1.28.4. I also tried upgrading the cluster to the latest stable version, 1.29.2, but there is no difference.

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
kubectl client v1.31.0, Kubernetes server 1.27.16, etcd version 3.5.9.

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
kops --name=mycluster --state s3://my-cluster-sample rolling-update cluster --instance-group=master-1b --yes

5. What happened after the commands executed?

kubectl get pod -n kube-system  | grep cilium
cilium-2vh5m                                    1/1     Running   49 (6d23h ago)   7d18h
cilium-4w865                                    1/1     Running   4 (6d23h ago)    6d23h
cilium-8ccpq                                    1/1     Running   0                28m
cilium-c78fl                                    1/1     Running   56 (38h ago)     21d
cilium-fbdhl                                    1/1     Running   0                6d7h
cilium-gwkxp                                    1/1     Running   22 (6d23h ago)   7d1h
cilium-lv2nd                                    1/1     Running   0                6d23h
cilium-operator-7575d5dccc-cwsqf                1/1     Running   2 (4m52s ago)    33m
cilium-operator-7575d5dccc-kgnm6                1/1     Running   2 (5m28s ago)    37m
cilium-pqprl                                    1/1     Running   49 (6d23h ago)   20d
cilium-rntt6                                    1/1     Running   7 (6d23h ago)    6d23h
cilium-rvbs4                                    1/1     Running   14 (6d23h ago)   7d
etcd-manager-cilium-i-01f09427e9d4fcd64         1/1     Running   0                5d23h
etcd-manager-cilium-i-026c2be03509de051         1/1     Running   0                27m
etcd-manager-cilium-i-0b84a6dd15b799c58         1/1     Running   0                6d2h

The cluster state seems healthy! All etcd-manager pods (including the new one, etcd-manager-cilium-i-026c2be03509de051) are reported as healthy.

The new etcd-manager pod (see the logs of etcd-manager-cilium-i-026c2be03509de051) reports that etcd has joined the rest of the cluster, but that is not true. In fact the etcd server is not listening on any of its ports (4003, 2382 and 8083 are all down); only etcd-manager itself is listening, on port 3991. The volume that is attached to the machine and shared with the pod is empty as well.
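
For reference, a minimal way to reproduce these checks on the new control-plane node might look like this (the port numbers are the cilium etcd ports of this cluster; the mount point of the cilium etcd volume is a placeholder and has to be looked up per node, e.g. via lsblk/mount):

ss -tlnp | grep -E '4003|2382|8083|3991'   # only etcd-manager (3991) is listening for cilium
lsblk -f                                   # the extra EBS volume is attached and formatted
sudo ls -la <cilium-etcd-mount-point>      # placeholder path; the volume was empty here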

The rest of the cluster, at least, reports that it cannot connect to the new etcd member, because it is not up and running.

LISTEN 0      32768    10.141.18.9:3997       0.0.0.0:*    users:(("etcd-manager",pid=5691,fd=8))
LISTEN 0      32768    10.141.18.9:3996       0.0.0.0:*    users:(("etcd-manager",pid=5747,fd=8))
LISTEN 0      32768    10.141.18.9:3991       0.0.0.0:*    users:(("etcd-manager",pid=5644,fd=8))
LISTEN 0      32768              *:2381             *:*    users:(("etcd",pid=5789,fd=7))
LISTEN 0      32768              *:2380             *:*    users:(("etcd",pid=5811,fd=7))
LISTEN 0      32768              *:4001             *:*    users:(("etcd",pid=5811,fd=8))
LISTEN 0      32768              *:4002             *:*    users:(("etcd",pid=5789,fd=8))
LISTEN 0      32768              *:8081             *:*    users:(("etcd",pid=5811,fd=19))
LISTEN 0      32768              *:8082             *:*    users:(("etcd",pid=5789,fd=22))
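
One way to cross-check the "joined" claim and the connection failures is to grep the etcd-manager pod logs directly (the grep patterns here are only illustrative):

kubectl -n kube-system logs etcd-manager-cilium-i-026c2be03509de051 | grep -iE 'join|member'
kubectl -n kube-system logs etcd-manager-cilium-i-0b84a6dd15b799c58 | grep -iE 'connect|error' | tail -n 20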

6. What did you expect to happen?
I expected to be able to fix the issue by forcing this node to join the rest of the cluster, by setting ETCD_INITIAL_CLUSTER_STATE=existing as an environment variable for this etcd cluster in our kops cluster configuration. After re-creating that node I would then have a healthy etcd cluster, but it does not work: etcd-manager claims everything is OK, but it is not true.
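
For context, the ETCD_INITIAL_CLUSTER_STATE=existing attempt was done roughly like this (a sketch; the field layout assumes the etcdClusters[].manager.env mechanism of the kops cluster spec, and the cluster name and state store are the ones from the reproduction command above):

kops edit cluster --name=mycluster --state s3://my-cluster-sample

# then, under spec.etcdClusters, for the cilium etcd cluster
# (field names assume the etcdClusters[].manager.env API; verify against your kops version):
  etcdClusters:
  - name: cilium
    manager:
      env:
      - name: ETCD_INITIAL_CLUSTER_STATE
        value: existing

# followed by applying the change and re-creating the node:
kops --name=mycluster --state s3://my-cluster-sample update cluster --yes
kops --name=mycluster --state s3://my-cluster-sample rolling-update cluster --instance-group=master-1b --yes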

nuved · Oct 02 '24 17:10