csi-driver-host-path
define resource limits to avoid eviction
Pods without a resource specification are the first to get evicted when a node runs out of resources. All of our deployments should specify the resources they require.
Perhaps there's also something else that can be done to prevent removal of a CSI driver instance from a node?
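For illustration, a minimal sketch of what such a specification could look like on one of the containers; the container name and the numbers below are placeholders chosen for the example, not measured values for this driver:

spec:
  # snip
  containers:
  - name: csi-provisioner   # illustrative container name
    # snip
    resources:
      requests:
        cpu: 10m
        memory: 32Mi
      limits:
        cpu: 100m
        memory: 128Mi

Setting requests also moves the pods out of the BestEffort QoS class, which is the first one to be evicted under node pressure.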
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
In addition to resource requests, we should recommend a pod priority.
As mentioned on the mailing list, an important DaemonSet should also have a blanket toleration. DaemonSets that are system-critical are recommended to include a blanket toleration; such a pod will never be evicted by a taint.
For example:

$ k get ds -n kube-system kube-proxy -oyaml
apiVersion: extensions/v1beta1
kind: DaemonSet
#snip
spec:
  #snip
  tolerations:
  - effect: NoExecute
    operator: Exists
  - effect: NoSchedule
    operator: Exists
/help
@msau42: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
If we deploy it under kube-system, we may use the system-node-critical priority class directly.
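For reference, a sketch of where that would go in the pod template, assuming the driver pods do run in the kube-system namespace (by default the system-node-critical priority class may only be used there):

spec:
  template:
    spec:
      priorityClassName: system-node-critical
      # snip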
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@pohly can we have generic memory and CPU resource limits defined for each YAML? Or should I add default memory and CPU limits to the YAMLs defined in the docs?
What do you mean by "generic resource limits"?
I don't know what the recommended way of determining resource limits is. It probably involves running the pods and then measuring, but I don't know how or what to measure.
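One possible way to do that measuring, as an assumption rather than a project recommendation: run the driver under a realistic workload and let the Vertical Pod Autoscaler (if it is installed in the cluster) observe usage without acting on it, then copy its recommendations into the YAMLs as requests. All names below are hypothetical:

# Observation-only VPA: updateMode "Off" records recommendations but
# never evicts or resizes the pods.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: csi-hostpathplugin-observer
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet          # whatever object actually manages the driver pods
    name: csi-hostpathplugin   # hypothetical name
  updatePolicy:
    updateMode: "Off"

A simpler point-in-time view is kubectl top pod --containers, but that only reflects usage at the moment it is run.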
@pohly here I would like to know whether there are any criteria for defining resources for each pod. For example, for the csi-hostpath-attacher pod, what would the CPU and memory limits be?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen