No limits or requests set on Trident resources
**Describe the bug**
No limits or requests are set on any Trident pods except the operator pod. This is bad practice: without requests and limits, a misbehaving Trident pod could consume unbounded CPU or memory on its node and take down the cluster.
**Environment**
- Trident version: 23.07
- Trident installation flags used: none
- Container runtime: containerd 1.6.6-3.1
- Kubernetes version: 1.25.9
- Kubernetes orchestrator: kubeadm
- Kubernetes enabled feature gates: na
- OS: RHEL8
- NetApp backend types: ONTAP AFF
**To Reproduce**
Install Trident with the Helm chart. Inspect the daemonset `daemonset.apps/trident-node-linux` and all pods it starts: none of them define resource limits or requests.
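A query along these lines makes the gap visible (the `trident` namespace is assumed from a default Helm install):

```shell
# Print each container's resources block from the node daemonset.
# An empty map ({}) means no requests or limits are set.
kubectl -n trident get daemonset trident-node-linux \
  -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'
```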
**Expected behavior**
The daemonset should set sensible CPU, memory, and ephemeral-storage requests and limits on all of its containers.
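As a minimal sketch, each container spec would carry a block like the following (the values are placeholders, not tuned recommendations):

```yaml
resources:
  requests:
    cpu: 10m
    memory: 64Mi
    ephemeral-storage: 64Mi
  limits:
    cpu: "1"
    memory: 512Mi
    ephemeral-storage: 256Mi
```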
**Additional context**
We bumped into this issue as well: we have a number of cluster policies (e.g. Kyverno) enforcing standards around how requests and limits are set across the cluster, along the lines of the policy sketched below.
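For reference, a Kyverno policy of this kind, modeled on the standard require-requests-limits sample (the policy name and exact pattern here are illustrative), rejects any Pod whose containers omit requests and limits:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits
spec:
  validationFailureAction: Enforce
  rules:
    - name: validate-resources
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory requests and memory limits are required."
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    cpu: "?*"
                    memory: "?*"
                  limits:
                    memory: "?*"
```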
It would be great if the Helm chart let us specify (override) the resources block for each pod it deploys (i.e. the controller, the daemonset, and all sidecars). This flexibility is a common paradigm across the Kubernetes Helm ecosystem, so we were surprised not to find it here.
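For illustration only, a hypothetical `values.yaml` interface might look like this (these keys are invented for the example; the Trident chart did not expose them at the time of this report):

```yaml
# Hypothetical chart values -- illustrative, not the actual Trident chart interface.
controller:
  resources:
    requests: {cpu: 10m, memory: 128Mi}
    limits: {memory: 512Mi}
node:
  resources:
    requests: {cpu: 10m, memory: 64Mi}
    limits: {memory: 256Mi}
```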
Hi @cjreyn, please let us know whether this issue still exists in newer versions of Trident. If it has been resolved, please close the issue.
Thanks for addressing this! I'll wait for it to trickle into a tagged release unless I find time to roll out the fixed branch on our test cluster.
I want to emphasize how important this is. Containers with no resource requests run in the BestEffort QoS class: they may be scheduled onto a node but starved of CPU under contention, OOM-killed first under memory pressure, or hit other hard-to-predict behavior.
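One quick way to confirm this is to check the QoS class Kubernetes has assigned (namespace assumed as above):

```shell
# Pods with no requests or limits on any container are classed BestEffort,
# making them the first candidates for eviction under node pressure.
kubectl -n trident get pods -o custom-columns='NAME:.metadata.name,QOS:.status.qosClass'
```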
Any update or progress on this issue?
This issue is addressed in Trident 25.10.