Philipp Hellmich
This race condition would only occur if a given pod is the only reason why the cluster scaled up in the first place. My observation was that most times a...
There won’t be other new workloads, because once Karpenter decides to remove a node, it is set to NoSchedule anyway.
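As a rough illustration (a sketch using the official `kubernetes` Python client; the taint keys are assumptions and differ between Karpenter versions), one could check whether a node is already tainted for disruption like this:

```python
# Hypothetical sketch: check whether a node already carries Karpenter's
# disruption taint before treating pending pods on it as schedulable again.
from kubernetes import client, config

# Taint keys used by different Karpenter releases; adjust for your version.
KARPENTER_TAINT_KEYS = {"karpenter.sh/disruption", "karpenter.sh/disrupted"}

def node_is_being_disrupted(node_name: str) -> bool:
    config.load_kube_config()  # or config.load_incluster_config()
    node = client.CoreV1Api().read_node(node_name)
    for taint in node.spec.taints or []:
        if taint.key in KARPENTER_TAINT_KEYS and taint.effect == "NoSchedule":
            return True
    return False
```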
Yes, I would like to ignore pods which have containers which will never recover from a terminal state, but I would also like to simply ignore pods which are in...
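A minimal sketch of such a filter, assuming the official `kubernetes` Python client and a hypothetical set of "will never recover" waiting reasons:

```python
# Hypothetical sketch: list pods with a container stuck in a terminal state,
# which an autoscaler could ignore when deciding whether a scale-up is needed.
from kubernetes import client, config

# Waiting reasons treated as unrecoverable; adjust to your environment.
TERMINAL_REASONS = {"CrashLoopBackOff", "ImagePullBackOff",
                    "CreateContainerConfigError"}

def pods_to_ignore(namespace: str = "") -> list[str]:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = (v1.list_namespaced_pod(namespace) if namespace
            else v1.list_pod_for_all_namespaces())
    stuck = []
    for pod in pods.items:
        for cs in pod.status.container_statuses or []:
            waiting = cs.state.waiting
            if waiting and waiting.reason in TERMINAL_REASONS:
                stuck.append(f"{pod.metadata.namespace}/{pod.metadata.name}")
                break
    return stuck
```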
We have alerting in place, but if you have a lot of teams using the cluster, it is a pain to take care of these issues manually.
/reopen

I do not think that this is solved.
Hmm, how did the old chart handle that case? https://github.com/k8s-at-home/charts/blob/master/charts/stable/home-assistant/templates/servicemonitor.yaml
Updating this line (`auth_result ****`) did not work for me. I still get:

```
Traceback (most recent call last):
  File "/usr/local/bin/gimme-aws-creds", line 17, in <module>
    GimmeAWSCreds().run()
  File "/usr/local/Cellar/gimme-aws-creds/2.4.1/libexec/lib/python3.9/site-packages/gimme_aws_creds/main.py", line 469, in ...
```