joshulyne
Ahh, that makes sense. It is possible, but I believe it will require an additional Kubernetes API call to get the extra label information from the khcheck pod when...
Unfortunately, DaemonSet pods automatically get the following toleration added (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations): `node.kubernetes.io/unschedulable` | `NoSchedule` | 1.12+ | "DaemonSet pods tolerate unschedulable attributes by default scheduler." This is the taint added to...
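For reference, this is roughly the toleration that ends up on those DaemonSet pods, per the docs linked above:

```yaml
# Toleration automatically added to DaemonSet pods by the default scheduler
# (Kubernetes 1.12+), per the linked docs -- it lets them land on nodes that
# have been marked unschedulable (e.g. cordoned nodes).
tolerations:
  - key: node.kubernetes.io/unschedulable
    operator: Exists
    effect: NoSchedule
```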
@segeva Making sure I understand your question -- you've enabled the pod-restarts and pod-status khchecks, have tried testing them using `badPodRestartTestSpec.yaml`, and no errors were being reported by the...
@segeva You need to `apply` the YAML again with a different namespace!
@segeva So what we've done is, instead of having `POD_NAMESPACE` point to another NS (sandbox), we actually apply the khcheck in the other NS and keep: ``` - name:...
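For anyone following along, here is a minimal sketch of that setup. The CRD group/version, image tag, and field names here may differ from your Kuberhealthy version, and the downward-API fieldRef is just one common way to keep `POD_NAMESPACE` in sync with wherever the check is applied -- treat this as an illustration, not the exact spec from the thread:

```yaml
# Hypothetical sketch: the khcheck is applied directly in the target namespace
# ("sandbox"), and POD_NAMESPACE is kept pointing at that same namespace via
# the downward API instead of hard-coding another NS.
apiVersion: comcast.github.io/v1
kind: KuberhealthyCheck
metadata:
  name: pod-status
  namespace: sandbox          # apply the check in the namespace you want to watch
spec:
  runInterval: 5m
  timeout: 15m
  podSpec:
    containers:
      - name: pod-status
        image: kuberhealthy/pod-status-check:v1.3.0   # illustrative image/tag
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace         # resolves to "sandbox"
```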
@segeva Regarding the large number of pods created by Kuberhealthy -- there is a check-reaper CronJob that should clean up old running checks, and there should be at most 5...
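If you want to verify the reaper is actually running, it's deployed as a CronJob alongside Kuberhealthy. A rough sketch of its shape is below; the schedule, image tag, service account, and batch API version are assumptions and depend on your install/chart version:

```yaml
# Rough sketch of the check-reaper CronJob deployed alongside Kuberhealthy.
# Schedule, image tag, service account, and API version below are assumptions;
# check the manifests/chart for your installed version.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: check-reaper
  namespace: kuberhealthy
spec:
  schedule: "*/3 * * * *"            # assumed schedule
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: check-reaper
          restartPolicy: Never
          containers:
            - name: check-reaper
              image: kuberhealthy/check-reaper:latest   # illustrative tag
```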
We merged in #768! To address the other issue, you're right that we should have all env vars configurable in Helm. I think what we want to do...
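As a rough illustration of the direction (the keys and env var names below are placeholders, not the final chart interface): expose an extra-env block in values.yaml and template it into the deployment's env section.

```yaml
# Hypothetical values.yaml shape -- key and env var names are placeholders,
# purely to illustrate "all env vars configurable in helm".
kuberhealthy:
  extraEnv:
    - name: SOME_TUNABLE        # placeholder env var name
      value: "5"
    - name: LOG_LEVEL           # placeholder env var name
      value: "debug"
```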
@adriananeci Sorry for the late response! The race condition that you see in your logs is expected, hence the retry. If your check is stuck in that loop, that's an...
@adriananeci Thanks for the quick response! So among the Kuberhealthy instances, only one of them can modify a khstate resource at a time; this is generally the master Kuberhealthy --...
So I tried to replicate the scenario -- creating a bad khcheck that fails to report back to Kuberhealthy within its timeout, and setting the runInterval low so that check...
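For reference, this is the kind of spec I mean for the bad khcheck -- a check pod that never reports back, with a short timeout and a low runInterval. The CRD version, image, and values here are illustrative, not the exact spec I ran:

```yaml
# Hypothetical "bad" khcheck: the pod just sleeps and never reports a status
# back to Kuberhealthy, so every run should hit the (short) timeout.
apiVersion: comcast.github.io/v1
kind: KuberhealthyCheck
metadata:
  name: bad-check
  namespace: kuberhealthy
spec:
  runInterval: 30s      # low run interval so runs stack up quickly
  timeout: 1m           # short timeout so the check fails fast
  podSpec:
    containers:
      - name: bad-check
        image: busybox                 # placeholder image
        command: ["sleep", "3600"]     # never reports back
```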