pega-helm-charts
Add ability to configure resources (cpu and memory) for init containers
Is your feature request related to a problem? Please describe.
The number of init containers deployed for Jobs and Deployments varies with the chart's execution mode. A recurring issue is that these init containers have no resource requests and limits assigned. This is exacerbated by our OpenShift Container Platform (OCP) cluster's SecurityContextConstraint, which mandates resource requests and limits on every deployed container. As a result, pod creation fails because the pod definitions within Jobs and Deployments include init containers that lack the required resource quota specifications.
Init containers without resource quotas that I am aware of:

Pega chart Jobs:

- pega-post-upgrade: wait-for-pegaupgrade, wait-for-rolling-updates
- pega-zdt-upgrade: wait-for-pre-dbupgrade

Pega chart Deployments:

- pega-batch: wait-for-pegainstall
- pega-web: wait-for-pegainstall

Backingservices chart Deployment:

- pega-srs: wait-for-internal-es-cluster
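For context, the same failure occurs in any namespace whose ResourceQuota covers compute resources: Kubernetes then rejects pods whose containers, init containers included, omit requests or limits. A minimal sketch of such a quota (the name and values are illustrative, not taken from our cluster):

```yaml
# Illustrative namespace quota; once applied, every container in the
# namespace (init containers included) must declare cpu/memory requests
# and limits, otherwise pod creation is rejected at admission time.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: pega
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```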
Describe the solution you'd like
Every container in the pods created by the Helm charts should support resource quota configuration through the chart. Not every container needs its own configuration block; for instance, the 'k8s-wait-for' init containers could likely share a single set of resource quotas defined once in values.yaml, as sketched below.
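A rough sketch of what such a shared block could look like; the key names (`initContainers.resources`) are an assumption for discussion, not the chart's actual schema:

```yaml
# Hypothetical values.yaml excerpt -- key names are illustrative only.
# One shared resources block applied to all k8s-wait-for init containers.
initContainers:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi
```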
Describe alternatives you've considered
Use Kustomize post-processing to manually patch the rendered resources with the required resource quotas.
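For illustration, such a post-processing step could look roughly like this; the target name, file name, and values are assumptions, and the patch path depends on the manifests rendered by `helm template`:

```yaml
# kustomization.yaml -- hypothetical JSON6902 patch adding resources to
# the first init container of the rendered pega-web Deployment.
resources:
  - rendered-chart.yaml
patches:
  - target:
      kind: Deployment
      name: pega-web
    patch: |-
      - op: add
        path: /spec/template/spec/initContainers/0/resources
        value:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi
```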
I am open to contributing, but I first wanted to start a discussion and agree on a solution to implement.
Hi @micgoe,
We have merged a PR that assigns CPU and memory to the init containers in order to support namespaces with resource quota limits.
https://github.com/pegasystems/pega-helm-charts/pull/622
For the backingservices chart, I am tagging @reddy-srinivas to make a similar change so that wait-for-internal-es-cluster also gets CPU and memory limits.
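For reference, hard-coded values in a pod template would look roughly like this; the image, args, and numbers are placeholders, not the exact content of the merged PR:

```yaml
# Illustrative only -- image, args, and resource values are placeholders.
initContainers:
  - name: wait-for-pegainstall
    image: <k8s-wait-for image>        # placeholder
    args: ["job", "<install job name>"] # placeholder
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi
```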
Can we make them configurable via variables in the Helm chart instead of hard-coding them?
The wait-for containers are lightweight and require minimal CPU and memory. Customization of the init containers is not permitted, to avoid additional workload for clients. Creating variables for them may not provide significant value and could lead to unnecessary configuration in the values.yaml file. Do you have a specific use case that requires configuring them very differently from the defaults?