k8s-spot-termination-handler
Monitors AWS for spot termination notices when run on spot instances and shuts down gracefully
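At its core, the handler polls the EC2 instance metadata service for a spot termination notice and drains the node when one appears. A minimal sketch in Python, where the polling interval, the `drain_node` helper, and the loop shape are illustrative assumptions rather than the project's actual code:

```python
import time
import requests

# EC2 instance metadata path that starts returning a timestamp once AWS
# issues a spot termination notice (it returns 404 until then).
TERMINATION_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"
POLL_INTERVAL = 5  # seconds; illustrative value, not the project's setting

def drain_node():
    """Placeholder for cordoning the node and evicting its pods."""
    ...

while True:
    resp = requests.get(TERMINATION_URL, timeout=2)
    if resp.status_code == 200:
        # A termination notice is present: drain, then pause so the
        # controller can delete this pod before another drain is issued
        # (see the "sleep for 120s after drain" change below).
        drain_node()
        time.sleep(120)
        break
    time.sleep(POLL_INTERVAL)
```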
Corrected the K8s Spot Rescheduler description in the README.md
Hello all, it probably hasn't gone unnoticed that this repository hasn't seen much activity in the last 6+ months, because we simply don't have...
This becomes essential when instances are exposed via an ELB: after receiving the termination notice, the instance is drained and will soon be terminated. Still, `kube-proxy` keeps running on the instance,...
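One possible mitigation, which is not part of the handler itself, is to deregister the doomed instance from the load balancer before draining. A sketch with boto3, assuming a classic ELB; the load-balancer name and region are hypothetical:

```python
import boto3
import requests

# Hypothetical mitigation sketch: look up this instance's ID from the
# metadata service, then deregister it from a classic ELB before draining,
# so traffic stops flowing through the soon-to-die kube-proxy.
instance_id = requests.get(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2).text

elb = boto3.client("elb", region_name="eu-west-1")  # region is an assumption
elb.deregister_instances_from_load_balancer(
    LoadBalancerName="my-load-balancer",            # hypothetical name
    Instances=[{"InstanceId": instance_id}],
)
```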
When draining a node in k8s, the first step is to patch the node spec and set `node.Spec.Unschedulable=true`. The current cluster role example doesn't grant this permission, so draining throws errors...
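For illustration, this is the cordon step as it might look with the official Kubernetes Python client; the node name is hypothetical, and the handler's real drain logic may differ:

```python
from kubernetes import client, config

config.load_incluster_config()  # assumes the code runs inside the cluster
v1 = client.CoreV1Api()

# This patch is the first thing `kubectl drain` does; issuing it requires
# the "patch" verb on the "nodes" resource in the handler's cluster role.
v1.patch_node(
    "ip-10-0-0-1.eu-west-1.compute.internal",  # hypothetical node name
    {"spec": {"unschedulable": True}},
)
```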
Sleep for 120s after drain to prevent the pod from restarting and issuing another drain before the controller terminates the pod.
Manually running the command to taint the node from the k8s-spot-termination pod produces this error: User "system:serviceaccount:kube-system:k8s-spot-termination-handler" cannot patch resource "nodes" in API group "" at the cluster scope. No actual actions...
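The error indicates the service account's cluster role lacks the `patch` verb on `nodes`. A sketch of creating a role that grants it, using the Kubernetes Python client; the role name and the verbs beyond `patch` are assumptions:

```python
from kubernetes import client, config

config.load_kube_config()  # run with admin credentials, not the handler's
rbac = client.RbacAuthorizationV1Api()

rbac.create_cluster_role(client.V1ClusterRole(
    metadata=client.V1ObjectMeta(name="k8s-spot-termination-handler"),
    rules=[client.V1PolicyRule(
        api_groups=[""],          # core API group, as in the error message
        resources=["nodes"],
        verbs=["get", "list", "patch", "update"],  # "patch" is the one missing
    )],
))
```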
I'm contributing a Helm chart to https://github.com/kubernetes/charts
Bumps [requests](https://github.com/psf/requests) from 2.21.0 to 2.31.0. Release notes, sourced from requests's releases: v2.31.0 (2023-05-22). Security: versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential forwarding of Proxy-Authorization...
Bumps [requests](https://github.com/psf/requests) from 2.21.0 to 2.32.0. Release notes, sourced from requests's releases: v2.32.0 (2024-05-20), 🐍 PYCON US 2024 EDITION 🐍. Security: fixed an issue where setting verify=False on the...