Ciprian Hacman
> I have always been wary of the ClusterAutoscaler due to its complexity, so I have never used it. For example from https://kops.sigs.k8s.io/addons/#cluster-autoscaler I get the idea that I have...
> Can you please confirm, do I understand correctly then that the installation of ClusterAutoscaler in its current state will indeed break the automation of kops upgrade cluster? Or was...
The problem is not `spec.image`; that will be kept. You will not be able to change `--scale-down-unneeded-time`. PS: Just try, it's easy to test your assumptions on a test...
What you want will mostly work on AWS, but not on most other supported cloud providers. kOps uses the upstream cluster autoscaler in general, which already has the feature, so most likely there...
All you have to do is enable it. If you want a newer image, kOps will not overwrite it on update/upgrade.

```yaml
clusterAutoscaler:
  enabled: true
```
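For reference, a minimal sketch of how that could look in the cluster spec with a pinned image. The `image` field and registry path follow the kOps addon docs linked earlier; the tag is just an example, so verify both against your kOps and Kubernetes versions:

```yaml
# Hedged sketch: enable the addon and pin a newer image via
# `kops edit cluster`. Per the comment above, kOps should leave
# the pinned image in place on update/upgrade.
spec:
  clusterAutoscaler:
    enabled: true
    # Example tag only; pick one matching your Kubernetes minor version.
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.2
```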
Could be combined with other timers. I would suggest waiting 1 hour and seeing. Also, check the cluster-autoscaler logs to see what it thinks of the node.

```go
DefaultScaleDownUnreadyTime = ...
```
See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-ca-deal-with-unready-nodes
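For context, a small runnable sketch of those timer defaults as documented in the linked FAQ (`--scale-down-unneeded-time` defaults to 10 minutes, `--scale-down-unready-time` to 20 minutes). The constant naming follows the snippet quoted above; treat the values as documentation-derived assumptions and check your autoscaler version for the exact source:

```go
package main

import (
	"fmt"
	"time"
)

// Scale-down timer defaults per the cluster-autoscaler FAQ linked above.
// Constant names mirror the snippet quoted earlier; values may differ
// in your autoscaler version.
const (
	// How long a node must be unneeded before it is eligible for removal.
	DefaultScaleDownUnneededTime = 10 * time.Minute
	// How long an unready node is tolerated before it is eligible for removal.
	DefaultScaleDownUnreadyTime = 20 * time.Minute
)

func main() {
	fmt.Println("unneeded:", DefaultScaleDownUnneededTime)
	fmt.Println("unready:", DefaultScaleDownUnreadyTime)
}
```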
Hi @ScOut3R. Are there more detailed logs in protokube? That would help to understand what the actual apply error is. Thanks!
> Could it be related to [#121437](https://github.com/kubernetes/k8s.io/issues/6010)? I've been facing issues with kubernetes 1.27.7 because kubectl binary seems to be unavailable from at least one CDN location.

Very unlikely.
> `DaemonSet.apps "calico-node" is invalid: spec.template.spec.initContainers[3].image: Required value`

@ScOut3R Could you check the calico manifest to see which init container is generating this error?
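To illustrate what to look for, a hedged sketch of the relevant section of a calico-node DaemonSet manifest. The container names and image tags here are illustrative only (they vary by Calico version); the point is that the entry at index 3 is the one the error reports as missing its `image`:

```yaml
# Illustrative shape of the section to inspect; names/tags are examples.
# Per the error, the init container at index 3 has no image set.
spec:
  template:
    spec:
      initContainers:
        - name: upgrade-ipam     # index 0
          image: docker.io/calico/cni:v3.26.1
        - name: install-cni      # index 1
          image: docker.io/calico/cni:v3.26.1
        - name: mount-bpffs      # index 2
          image: docker.io/calico/node:v3.26.1
        - name: flexvol-driver   # index 3 <- image empty/missing here
          image: ""
```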