[bitnami/redis-cluster] Add support for defining resources on pod level
Name and Version
bitnami/redis-cluster
What is the problem this feature will solve?
This problem really touches multiple charts, but let's tackle it using redis-cluster as an example.
Currently, when defining resources you have to do it per container, e.g. you have to set CPU requests and limits separately for the redis container and separately for the metrics one.
In practice the metrics container usually consumes far fewer resources than the "main" container, and you have to fine-tune each container's requests and limits so they are large enough to avoid throttling yet small enough not to waste resources.
What is the feature you are proposing to solve the problem?
KEP-2837 proposes a solution that would allow resources to be defined once, on the pod level, and shared among all containers running in the pod.
It was introduced to Kubernetes as an alpha feature in 1.32 and must be enabled via a feature gate.
I'd like to propose adding support for defining resources in values.yaml that would be defined on the pod level.
With this feature in place, the defined resources would be shared among all containers in the pod, avoiding both waste and unnecessary throttling.
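As a rough sketch of what this could look like: a single pod-level block in values.yaml that the chart renders into the pod spec's `resources` field introduced by KEP-2837. The `podResources` value name below is hypothetical (it is not an existing chart value), and the feature requires Kubernetes >= 1.32 with the `PodLevelResources` feature gate enabled.

```yaml
# values.yaml (hypothetical key -- `podResources` is an assumption, not
# an existing chart value):
podResources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi

---
# Rendered pod spec: the pod-level `resources` field (KEP-2837, alpha in
# 1.32 behind the PodLevelResources feature gate) is shared by all
# containers, so neither container needs its own requests/limits.
apiVersion: v1
kind: Pod
spec:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
  containers:
    - name: redis
      image: bitnami/redis-cluster
    - name: metrics
      image: bitnami/redis-exporter
```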
What alternatives have you considered?
No response
Hi!
Thank you so much for reporting. I will forward it to the team but as it is not a critical feature we cannot guarantee an ETA. If you want to speed up the process, you can submit a PR adding support for metrics.resources
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Still valid.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.