permit specifying trident-operator resources
closes https://github.com/NetApp/trident/issues/927
Change description
Permit specifying the resources of the trident-operator deployment
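For illustration, the kind of override this enables might look like the following in a chart values file (a sketch only; the exact key name and default numbers are assumptions, not necessarily what the PR ships):

```yaml
# values.yaml (sketch): resources for the trident-operator Deployment,
# now user-overridable instead of hardcoded in the template.
# The specific numbers below are illustrative defaults.
resources:
  requests:
    cpu: 10m
    memory: 40Mi
  limits:
    cpu: 100m
    memory: 128Mi
```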
Did you add unit tests? Why not?
No; this is a minor Helm chart modification.
Does this code need functional testing?
No; this just moves existing configuration into `.Values`.
Is a code review walkthrough needed? Why or why not?
No
Does this code need a note in the changelog?
trident operator helm chart: permit specifying trident-operator resources
Additional Information
@vivintw why did you add your commit? It was breaking the PR.
Can you consider merging this? It would have been nice to see this happen for the Oct. release, although now I doubt it will be possible.
We get alerts because of the low hardcoded CPU limits since version 100.2406.1. Merging this would be highly appreciated.
We're also getting alerts and so are really keen to see this merged.
We currently have the same issue here: the limits are too low, and we can't override them with the chart.
@clintonk Can you have a look at this PR, please?
Hi, @remyj38. We understand the importance of this, and it is planned. A few questions, please...
- This is needed for all containers, including the node Daemonset, CSI sidecars, autosupport, init containers, etc., right?
- While all values would be configurable, our plan would be to set default requests, but not limits, since someone will always need more than any limit we set. Any concerns with that approach?
- Setting priorityClassName on the Deployment, as we already do on the Daemonset, is valuable, right?
- How did you arrive at the values in this PR? We would expect to measure them and add a bit of padding.
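A minimal sketch of how both of those suggestions could be wired into the chart, assuming a top-level `resources` value and an optional `priorityClassName` (illustrative names, not necessarily the chart's final keys):

```yaml
# templates/deployment.yaml (sketch)
spec:
  template:
    spec:
      {{- with .Values.priorityClassName }}
      priorityClassName: {{ . }}
      {{- end }}
      containers:
        - name: trident-operator
          # Render whatever the user supplied; with requests-only defaults,
          # no limit is imposed unless the user sets one explicitly.
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```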
Hi @clintonk
- This is needed for all containers, including the node Daemonset, CSI sidecars, autosupport, init containers, etc., right?

=> In my case, I only need to change the operator itself, not the other pods deployed by the operator. But @clementnuss opened another PR to make the resources of the managed objects configurable: #930

- While all values would be configurable, our plan would be to set default requests, but not limits, since someone will always need more than any limit we set. Any concerns with that approach?

=> Removing limits can have harmful effects. It's better to define default values and allow users to override them if needed.

- Setting priorityClassName on the Deployment, as we already do on the Daemonset, is valuable, right?

=> It's a good idea, but as with the resources, it's better to provide a default value and let users choose whether to keep it or override it.

- How did you arrive at the values in this PR? We would expect to measure them and add a bit of padding.

=> They are the same values as before, just moved into a variable.
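To make the "defaults plus override" point concrete: since Helm merges user-supplied values with chart defaults at the map level, a user hitting CPU throttling could raise only the limits and keep the default requests (key name assumed from the PR, numbers illustrative):

```yaml
# my-values.yaml (sketch): override only the limits; the chart's default
# requests are kept because Helm deep-merges nested value maps.
resources:
  limits:
    cpu: 500m
    memory: 256Mi
```

Applied with `helm upgrade ... -f my-values.yaml`, this would lift the hardcoded limit without changing any other default.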
That's helpful, @remyj38, thank you.
Hi @clintonk, do you have any news about the availability of this feature?
Hi @clintonk, this feature would also be helpful for my organization; we're also seeing our trident-operator pods being CPU throttled because the default limit is set too low. As this merge request is completely backwards compatible but allows us to override limits as required, it would be great to get it merged in.
@anguswilliams Understood. We're working on this at high priority for 25.10.