
A watchdog to restart nodes in NotReady state

Open victor-sudakov opened this issue 2 years ago • 54 comments

/kind feature

1. Describe IN DETAIL the feature/behavior/change you would like to see.

There are cases when a node is NotReady from the point of view of Kubernetes/kOps, but healthy from the point of view of the corresponding AWS autoscaling group. The easiest way to reproduce this situation is to stop the kubelet service on a node: the node will stay in the NotReady state forever after that. What is worse, pods using PVCs will never be rescheduled from such a node to other nodes, because their EBS volumes are still attached to the NotReady node and cannot be detached until it is stopped or terminated.

Can we add some watchdog addon which would call aws autoscaling set-instance-health --health-status Unhealthy (or something similar) when a node has been NotReady for a certain configured time?

This would allow clusters to heal themselves.
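
Something along these lines, run periodically, is roughly what I have in mind (an untested sketch only; it omits the "NotReady for N minutes" check and assumes the node's providerID has the usual aws:///<az>/<instance-id> form):

# Find every NotReady node and tell its autoscaling group that the instance is
# unhealthy, so the ASG terminates and replaces it.
for node in $(kubectl get nodes --no-headers | awk '$2 ~ /NotReady/ {print $1}'); do
  # The EC2 instance id is the last component of the node's providerID.
  instance_id=$(kubectl get node "$node" -o jsonpath='{.spec.providerID}' | awk -F/ '{print $NF}')
  aws autoscaling set-instance-health --instance-id "$instance_id" --health-status Unhealthy
done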

victor-sudakov avatar Jul 08 '23 16:07 victor-sudakov

ClusterAutoscaler might be a good fit for this use case. They recently added a new flag --scale-down-unready-enabled. https://github.com/kubernetes/autoscaler/pull/5537

hakman avatar Jul 08 '23 18:07 hakman

I have always been wary of the ClusterAutoscaler due to its complexity, so I have never used it. For example, from https://kops.sigs.k8s.io/addons/#cluster-autoscaler I get the idea that I have to specify the latest supported ClusterAutoscaler image for the given Kubernetes version. Does that mean I will have to change this manually in the manifest on each kops upgrade cluster run?

In short, I think it's overkill for my rather simple task and will add a lot of admin overhead. Do you think there is an alternative, simpler solution?

victor-sudakov avatar Jul 09 '23 06:07 victor-sudakov

I have always been wary of the ClusterAutoscaler due to its complexity, so I have never used it. For example, from https://kops.sigs.k8s.io/addons/#cluster-autoscaler I get the idea that I have to specify the latest supported ClusterAutoscaler image for the given Kubernetes version. Does that mean I will have to change this manually in the manifest on each kops upgrade cluster run?

There are 3 options here:

  1. You wait for a contributor to add these args for cluster-autoscaler to kOps
  2. You make a PR to add these args to kOps
  3. You deploy and manage cluster-autoscaler yourself with these args (see the sketch below)
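
Roughly, option 3 would mean running cluster-autoscaler with flags like these (the flag names come from upstream cluster-autoscaler; the values are only examples, not recommendations):

command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  # allow scale-down of unready nodes at all (defaults to true upstream)
  - --scale-down-unready-enabled=true
  # how long a node may be unready before it is considered for removal
  - --scale-down-unready-time=10m
  # how long a node may be unneeded before it is removed
  - --scale-down-unneeded-time=10m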

In short, I think it's overkill for my rather simple task and will add a lot of admin overhead. Do you think there is an alternative, simpler solution?

Check if https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds helps. Once nodes are empty, cluster-autoscaler should terminate them.

hakman avatar Jul 09 '23 06:07 hakman

There are 3 options here: You wait for a contributor to add these args for cluster-autoscaler to kOps

Can you please confirm, do I understand correctly then that the installation of ClusterAutoscaler in its current state will indeed break the automation of kops upgrade cluster? Or was it my erroneous interpretation of the docs?

Once nodes are empty, cluster-autoscaler should terminate them.

The affected nodes will never be empty: their pods using PVCs will never be rescheduled to other nodes, because the EBS volumes are still attached to the NotReady node and cannot be detached until it is stopped or terminated. You need the node to actually be terminated to free the EBS volume backing the PV so it can be reattached elsewhere.

Snippet from the log of such a situation:

Warning  FailedAttachVolume  3m9s  attachdetach-controller  Multi-Attach error for volume "pvc-bb834416-d65f-4d85-b7bf-8f64e2e62786" Volume is already exclusively attached to one node and can't be attached to another
Warning  FailedMount         66s   kubelet                  Unable to attach or mount volumes: unmounted volumes=[db-rm], unattached volumes=[db-rm kube-api-access-ss8w5]: timed out waiting for the condition

victor-sudakov avatar Jul 09 '23 07:07 victor-sudakov

Can you please confirm, do I understand correctly then that the installation of ClusterAutoscaler in its current state will indeed break the automation of kops upgrade cluster? Or was it my erroneous interpretation of the docs?

You cannot use kOps to manage an addon (like cluster-autoscaler) and modify the manifests later. kOps will eventually notice and remove the changes.

hakman avatar Jul 09 '23 07:07 hakman

You cannot use kOps to manage an addon (like cluster-autoscaler) and modify the manifests later. kOps will eventually notice and remove the changes.

Sorry, I don't quite understand you. So, when I add the autoscaler addon, I specify a particular version in spec.image, correct? Then, when I have to upgrade the cluster via kops upgrade cluster, what should I do with the addon and with the value of spec.image? Update it manually, and at what moment?

victor-sudakov avatar Jul 09 '23 07:07 victor-sudakov

The problem is not the spec.image. That will be kept. You will not be able to change --scale-down-unneeded-time.

PS: Just try it; it's easy to test your assumptions on a test cluster.

hakman avatar Jul 09 '23 07:07 hakman

I guess I don't need it kept; I need it updated automatically to the latest version supported by the Kubernetes release the cluster is running.

Of course I'll experiment with the addon, but I still think it's overkill for the simple task of restarting NotReady nodes just to free their EBS volumes. The addon's documentation alone is overwhelming.

victor-sudakov avatar Jul 09 '23 07:07 victor-sudakov

I want something like this, just for kOps: https://docs.digitalocean.com/developer-center/automatic-node-repair-on-digitalocean-kubernetes/

victor-sudakov avatar Jul 10 '23 12:07 victor-sudakov

What you want would mostly work on AWS, but not on most of the other supported cloud providers. kOps relies on cluster-autoscaler in general, which already has this feature, so most likely no separate feature will be added. The --scale-down-unready-enabled flag was only added to let you disable the behaviour you want; the functionality itself has been there for some time (probably 4+ years).

Add flag '--scale-down-unready-enabled' to enable or disable scale-down of unready nodes. Default value set to true for backwards compatibility (i.e., allow scale-down of unready nodes). There are cases where a user may not want the unready nodes to be removed from a cluster. As an example, but not limited to, this might be useful in case a node is unreachable for a period of time and local data live there, the node shall remain in the cluster, and possibly an admin may want to take any actions for recovering it.

hakman avatar Jul 10 '23 15:07 hakman

I have not received a definite answer, so can you please tell me: does the autoscaler bring additional administrative overhead once installed into the cluster?

victor-sudakov avatar Jul 11 '23 08:07 victor-sudakov

All you have to do is enable it. If you set a newer image yourself, kOps will not overwrite it on update/upgrade.

  clusterAutoscaler:
    enabled: true

hakman avatar Jul 11 '23 09:07 hakman

Sorry, this does not solve the problem. I installed the autoscaler and the metrics server:

clusterAutoscaler:
  enabled: true
metricsServer:
  enabled: true
  insecure: true

Then I did a rolling update of the cluster, SSH-ed into a node, and stopped the kubelet. The node became NotReady in the "kubectl get nodes" output and has been in this state for 28 minutes already. No attempt has been made to restart or terminate it.

Probably some additional, non-default configuration is needed to use the autoscaler as a NotReady watchdog?

victor-sudakov avatar Jul 18 '23 08:07 victor-sudakov

It could be combined with other timers. I would suggest waiting 1 hour and seeing. Also, check the cluster-autoscaler logs to see what it thinks of the node.

DefaultScaleDownUnreadyTime = 20 * time.Minute

hakman avatar Jul 18 '23 08:07 hakman

An hour is too much for cluster recovery, but I will wait and report.

victor-sudakov avatar Jul 18 '23 08:07 victor-sudakov

PS: an hour is too much because a node in the NotReady state does not release its EBS volumes, so the StatefulSets with PVCs stop working for the whole time the node is not ready. In this context, even 5 minutes is too much.

victor-sudakov avatar Jul 18 '23 08:07 victor-sudakov

I think you are perhaps looking for the wrong solution. When nodes become not-ready, pods should be evicted. The time before eviction is also configurable. See https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions

PVCs certainly follow the pod regardless of the instance state. The EBS controller will take care of detaching and reattaching EBS volumes as needed.
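
For example, a pod can shorten the default 300-second tolerations for an unreachable or not-ready node like this (the 60-second value is just an illustration):

tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60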

olemarkus avatar Jul 18 '23 08:07 olemarkus

When nodes become not-ready, pods should be evicted.

That is a different story. Even if all the pods are successfully evicted from the node, it is not good to keep the NotReady node around forever. There should be some self-repair mechanism in the cluster, like the one DigitalOcean uses (see the link nine messages above).

victor-sudakov avatar Jul 18 '23 09:07 victor-sudakov

If the node has been (mostly) evicted, i.e. its utilisation is below the configured CAS threshold, then CAS will terminate it. There is a huge difference between failing workloads and failing instances. Besides, you also want the not-ready node to stick around so you can determine the cause of failure.

olemarkus avatar Jul 18 '23 09:07 olemarkus

I would suggest to wait 1 hour and see.

An hour has passed and still ... nada. The log of one of the three autoscaler pods: https://termbin.com/z2vu

victor-sudakov avatar Jul 18 '23 09:07 victor-sudakov

Besides, you also want the not-ready node to stick around so you can determine the cause of failure.

Maybe I do, if a replacement node has been started. But not the way it works now, where the failed node is just marked NotReady and no replacement is started.

victor-sudakov avatar Jul 18 '23 09:07 victor-sudakov

The logs say that it shouldn't scale down because you are at minimum capacity anyway. And there is no need for additional capacity in the cluster. This is not a very likely scenario in a cluster that has actual workloads.

olemarkus avatar Jul 18 '23 09:07 olemarkus

And no eviction happens either. The StatefulSet-managed pods are just stuck in the "Terminating" status on the NotReady node. I can see the NotReady node has the node.kubernetes.io/unreachable:NoExecute and node.kubernetes.io/unreachable:NoSchedule taints, but the pod is still there and has not been rescheduled anywhere.

You know what? Could you reproduce this for me?

  1. Create a kOps cluster on AWS.
  2. Create a StatefulSet with one test pod and a PVC (make sure you configure a volumeClaimTemplate in the StatefulSet and the pod has a volume attached); a minimal example follows this list.
  3. Disable the kubelet on the node where your test pod is running (e.g. via SSH to the node). This emulates a faulty node.
  4. Watch the node stay in the NotReady state forever and the test pod stay in the "Terminating" status forever, never rescheduled.
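
Something like this for step 2 (a minimal sketch; the image, names and sizes are arbitrary, and the cluster's default storage class is assumed):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: notready-test
spec:
  serviceName: notready-test
  replicas: 1
  selector:
    matchLabels:
      app: notready-test
  template:
    metadata:
      labels:
        app: notready-test
    spec:
      containers:
        - name: app
          image: busybox
          # keep the pod running so it stays scheduled on the faulty node
          command: ["sh", "-c", "while true; do sleep 3600; done"]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi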

victor-sudakov avatar Jul 18 '23 09:07 victor-sudakov

The logs say that it shouldn't scale down because you are at minimum capacity anyway. And there is no need for additional capacity in the cluster. This is not a very likely scenario in a cluster that has actual workloads.

This is because I really don't need autoscaling; I just need faulty-node replacement. And you are right, this is an experimental cluster.

victor-sudakov avatar Jul 18 '23 09:07 victor-sudakov

Sorry I am not able to reproduce this now. If the pod hangs on termination, there is a finaliser blocking it, most likely.

olemarkus avatar Jul 18 '23 09:07 olemarkus

See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-ca-deal-with-unready-nodes

hakman avatar Jul 18 '23 09:07 hakman

See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-ca-deal-with-unready-nodes

Once there are more unready nodes in the cluster, CA stops all operations until the situation improves.

That totally does not sound like it would restart or recreate any nodes. It looks like the autoscaler just stops working when there are more unready nodes than a certain threshold.

victor-sudakov avatar Jul 18 '23 10:07 victor-sudakov

Sorry I am not able to reproduce this now. If the pod hangs on termination, there is a finaliser blocking it, most likely.

Whatever the problem with the eviction is, it is not for this issue. I'll debug it and may open another issue, or find the reason myself. Let's stick to kOps' self-healing here.

victor-sudakov avatar Jul 18 '23 10:07 victor-sudakov

Sure. Happy to review a PR.

olemarkus avatar Jul 18 '23 11:07 olemarkus

The question of automatically restarting NotReady nodes is still open. The autoscaler (at least with default settings) does not seem suitable for this purpose.

victor-sudakov avatar Jul 19 '23 04:07 victor-sudakov