cert-manager
Make it possible to split a cert-manager installation over multiple Helm releases.
Good day
This feature is required for fine-tuning the environment in our clusters; specifically, we want to be able to disable every resource within the scope of a release.
custom-values.yaml:

```yaml
# Disable all components by default
global:
  podSecurityPolicy:
    enabled: false
    useAppArmor: false
  rbac:
    create: false
certManager:
  enabled: false
  serviceAccount:
    create: false
    automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: false
prometheus:
  enabled: false
  servicemonitor:
    enabled: false
    honorLabels: false
webhook:
  enabled: false
  securityContext:
    runAsNonRoot: false
  serviceAccount:
    create: false
    automountServiceAccountToken: false
  hostNetwork: false
cainjector:
  enabled: false
  securityContext:
    runAsNonRoot: false
  serviceAccount:
    create: false
    automountServiceAccountToken: false
startupapicheck:
  enabled: false
  securityContext:
    runAsNonRoot: false
  serviceAccount:
    create: false
    automountServiceAccountToken: false
installCRDs: false
```
```shell
helm template . -f custom-values.yaml
```
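For values like these to have any effect, each manifest in the chart would need to be wrapped in a guard on its component's enabled flag. The following is a minimal, illustrative sketch (not the actual chart source) of how the webhook Deployment could be gated, assuming the proposed `webhook.enabled` flag; the helper and value names follow the chart's existing layout, and the spec is abbreviated.

```yaml
# templates/webhook-deployment.yaml -- illustrative sketch only, not the real chart file.
# The .Values.webhook.enabled guard is the flag this PR proposes; it does not exist
# in the upstream chart today.
{{- if .Values.webhook.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "webhook.fullname" . }}
  labels:
    app.kubernetes.io/component: "webhook"
spec:
  replicas: {{ .Values.webhook.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/component: "webhook"
  template:
    metadata:
      labels:
        app.kubernetes.io/component: "webhook"
    spec:
      containers:
        - name: cert-manager-webhook
          image: "{{ .Values.webhook.image.repository }}:{{ .Values.webhook.image.tag }}"
{{- end }}
```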
In our company, charts are required to meet certain criteria, including the ability to completely disable every component in the chart so that the installation can be composed in a fine-grained way afterwards.
For example, splitting the installation into separate releases: one for the controller, one for the webhook, and one for the CRDs.
To perform such a split, we need to be able to disable each resource individually.
Example:
```console
[root@control1 bash]# helm list
NAME                      NAMESPACE        REVISION  STATUS    CHART                   APP VERSION
cert-manager.controller   pfm-certmanager  1         deployed  cert-manager-v1.7.0-v2  v1.7.0
cert-manager.webhook      pfm-certmanager  2         deployed  cert-manager-v1.7.0-v2  v1.7.0
cert-manager.crd          pfm-certmanager  3         deployed  cert-manager-v1.7.0-v2  v1.7.0
cert-manager.monitoring   pfm-certmanager  1         deployed  cert-manager-v1.7.0-v2  v1.7.0
```
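To illustrate how such a listing could come about, here is a hypothetical sequence of installs. It assumes the chart is available locally at ./cert-manager and exposes the proposed per-component enabled flags; the release names and namespace mirror the listing above.

```shell
# Hypothetical commands only: they assume the per-component flags proposed in this PR exist.
# Everything is disabled via custom-values.yaml, then each release re-enables one component.
helm install cert-manager.controller ./cert-manager -n pfm-certmanager \
  -f custom-values.yaml --set certManager.enabled=true

helm install cert-manager.webhook ./cert-manager -n pfm-certmanager \
  -f custom-values.yaml --set webhook.enabled=true

helm install cert-manager.crd ./cert-manager -n pfm-certmanager \
  -f custom-values.yaml --set installCRDs=true

helm install cert-manager.monitoring ./cert-manager -n pfm-certmanager \
  -f custom-values.yaml --set prometheus.enabled=true,prometheus.servicemonitor.enabled=true
```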
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign inteon for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Hi @FR-Solution. Thanks for your PR.
I'm waiting for a cert-manager member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Due to a large number of conflicts, the PR had to be reset; the earlier discussion was held in https://github.com/cert-manager/cert-manager/pull/5823.
@munnerz
/kind feature
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@cert-manager-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close