eks-rolling-update

EKS Rolling Update is a utility for updating the launch configuration of worker nodes in an EKS cluster.

27 eks-rolling-update issues

When cluster-autoscaler is disabled during the upgrade process, new pods scheduled onto the cluster are stuck in Pending until the script finishes running and cluster-autoscaler...

When running this script I get the following warning: ``` Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data. ``` Updating the command now...
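For reference, a minimal sketch of what the updated drain invocation could look like, assuming the tool shells out to `kubectl` (the `drain_node` helper and its parameters are illustrative, not the project's actual code):

```python
import subprocess

def drain_node(node_name: str, timeout: str = "300s") -> None:
    """Drain a node before its instance is terminated.

    Hypothetical sketch: on kubectl >= 1.20 the --delete-local-data flag
    is deprecated in favour of --delete-emptydir-data.
    """
    cmd = [
        "kubectl", "drain", node_name,
        "--ignore-daemonsets",
        "--delete-emptydir-data",  # replaces the deprecated --delete-local-data
        f"--timeout={timeout}",
    ]
    subprocess.run(cmd, check=True)
```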

eks-rolling-update 1.20.2 is going into a crash loop and is not stable, and it also shuts down cluster-autoscaler. This is on an EKS 1.20 cluster. Logs below: 2022-03-06 07:48:03,281 INFO Exiting since ASG...

Hi, As per the [comment](https://github.com/hellofresh/eks-rolling-update/pull/117#issuecomment-1040306424) from @js-timbirkett, the filtering of nodes via `EXCLUDE_NODE_LABEL_KEYS` doesn't seem to work. As a suggestion, I've modified `get_k8s_nodes()` to return `nodes, excluded_nodes`, then modified the...
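A minimal sketch of the suggested split, assuming the official Kubernetes Python client is used to list nodes (the example label key and the kubeconfig loading are assumptions, not the project's actual implementation):

```python
from kubernetes import client, config

# Hypothetical label key; the real value comes from the
# EXCLUDE_NODE_LABEL_KEYS environment variable.
EXCLUDE_NODE_LABEL_KEYS = ["eks-rolling-update.io/exclude"]

def get_k8s_nodes():
    """Return (nodes, excluded_nodes).

    Nodes carrying any key listed in EXCLUDE_NODE_LABEL_KEYS are returned
    separately so callers can skip them, rather than being silently
    dropped from the node list.
    """
    config.load_kube_config()  # or load_incluster_config() inside a pod
    all_nodes = client.CoreV1Api().list_node().items
    nodes, excluded_nodes = [], []
    for node in all_nodes:
        labels = node.metadata.labels or {}
        if any(key in labels for key in EXCLUDE_NODE_LABEL_KEYS):
            excluded_nodes.append(node)
        else:
            nodes.append(node)
    return nodes, excluded_nodes
```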

Hi, Nice work folks! There look to be some interesting fixes that have come in this year, which I'd love to have available via PyPI. Any plans for a release?...

https://github.com/hellofresh/eks-rolling-update/blob/70306efbf8d9a6c587d8f82af60d995827122537/eksrollup/lib/aws.py#L112-L126 This causes a failure to run when you have more running instances than desired: ``` 00:04:19.030 2022-01-25 16:24:43,568 INFO Checking asg golf-dev-mgmt-worker-node-0-20191108085402700900000002 instance count... 00:04:19.030 2022-01-25 16:24:43,701 INFO Asg golf-dev-mgmt-worker-node-0-20191108085402700900000002...
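One possible shape of a more tolerant check, sketched with boto3 (the function name and the `>=` comparison are a suggested workaround, not the code in `aws.py`):

```python
import boto3

def asg_at_or_above_desired(asg_name: str) -> bool:
    """Return True when the ASG has at least its desired number of
    InService instances.

    Comparing with >= instead of == means an ASG that temporarily has
    more running instances than desired (e.g. spot replacements still
    draining) does not make the instance-count check fail.
    """
    asg_client = boto3.client("autoscaling")
    resp = asg_client.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
    asg = resp["AutoScalingGroups"][0]
    in_service = [i for i in asg["Instances"] if i["LifecycleState"] == "InService"]
    return len(in_service) >= asg["DesiredCapacity"]
```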

This was on an old version of boto3 that was not compatible with Python 3.10 due to the removal of the deprecated ABC aliases from `collections` (they are now only in `collections.abc`); updating boto3 in requirements.txt fixed this issue >...
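The root cause in miniature: Python 3.10 dropped the top-level aliases that very old botocore releases still imported, so the import has to come from `collections.abc` instead.

```python
# Python 3.10 removed the ABC aliases from the top-level collections module.
try:
    from collections import Mapping  # fails with ImportError on Python 3.10+
except ImportError:
    from collections.abc import Mapping  # correct location since Python 3.3
```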

We have a big and busy EKS cluster with nodes joining and leaving many times a day (spot instances failing or being replaced). We try to update each ASG...

Any specific reason for not using the API to drain the node? I have code that I use reliably to drain the node via the API. Is there any way...
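A minimal sketch of what API-based draining can look like with the official Python client, using a cordon plus the Eviction API (the function name and the omitted safeguards are illustrative, not the commenter's actual code; DaemonSet filtering, retries and waiting for evictions to finish are left out):

```python
from kubernetes import client, config

def drain_node_via_api(node_name: str) -> None:
    """Cordon a node and evict its pods through the Kubernetes API,
    as an alternative to shelling out to `kubectl drain`.

    Note: the eviction body class is V1Eviction on recent client
    versions (V1beta1Eviction on older ones).
    """
    config.load_kube_config()  # or load_incluster_config() inside a pod
    core = client.CoreV1Api()

    # Cordon: mark the node unschedulable.
    core.patch_node(node_name, {"spec": {"unschedulable": True}})

    # Evict every pod scheduled on the node.
    pods = core.list_pod_for_all_namespaces(
        field_selector=f"spec.nodeName={node_name}"
    ).items
    for pod in pods:
        eviction = client.V1Eviction(
            metadata=client.V1ObjectMeta(
                name=pod.metadata.name, namespace=pod.metadata.namespace
            )
        )
        core.create_namespaced_pod_eviction(
            name=pod.metadata.name,
            namespace=pod.metadata.namespace,
            body=eviction,
        )
```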