node-feature-discovery
Install CRDs from a subchart instead of using Helm 3 crds directory
What would you like to be added: Move away from the Helm 3 way of installing CRDs and use a subchart for them instead.
Why is this needed:
Moving CRDs into a chart of their own solves several issues: clean uninstall, the possibility of upgrades, and templating.
Motivation and Context
My main motivation is to have a better experience when managing this operator with Argo CD. That said, moving away from the Helm 3 approach to installing CRDs might be helpful in other scenarios too. I'm posting the full rationale below.
Helm 3 does not manage CRDs (see https://helm.sh/docs/chart_best_practices/custom_resource_definitions/). `helm uninstall` won't remove CRDs, and `helm upgrade` won't upgrade them. Manual intervention is required with the current setup, as sketched after the list below.
- CRDs installed from the `crds` folder are not included in the Helm release.
- Additionally, it is not possible to template CRDs in the `crds` folder in Helm 3.
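For illustration, this is the kind of manual intervention the current setup implies (a sketch only; the release name, chart reference, paths, and CRD name are assumptions, not taken from the project's docs):

```shell
# CRDs shipped in the chart's crds/ directory are only applied on first
# install, so upgrading them needs a manual step before `helm upgrade`:
kubectl apply -f deployment/helm/node-feature-discovery/crds/
helm upgrade nfd node-feature-discovery/node-feature-discovery

# They are also left behind by `helm uninstall` and must be deleted by hand:
helm uninstall nfd
kubectl delete crd nodefeaturerules.nfd.k8s-sigs.io   # CRD name is illustrative
```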
The following comes from the Helm best practices around CRDs:
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss.
ref: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-1-let-helm-do-it-for-you
The alternative, which is what I propose, also comes from the Helm page:
Another way to do this is to put the CRD definition in one chart, and then put any resources that use that CRD in another chart.
In this method, each chart must be installed separately. However, this workflow may be more useful for cluster operators who have admin access to a cluster
ref: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-2-separate-charts
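To make this concrete, here is a minimal sketch of what a CRD subchart could look like; the subchart name, the `crds.install` flag, the versions, and the repository path are assumptions for illustration, not an actual design:

```yaml
# Chart.yaml of the main node-feature-discovery chart (sketch)
apiVersion: v2
name: node-feature-discovery
version: 0.0.0-example
dependencies:
  - name: node-feature-discovery-crds            # hypothetical CRD-only chart
    version: 0.0.0-example
    repository: "file://../node-feature-discovery-crds"  # assumed sibling directory
    condition: crds.install                      # hypothetical flag to opt out of CRDs
```

The CRD chart would keep its manifests under `templates/` instead of `crds/`, so they become part of the Helm release: they can be templated, they are updated by `helm upgrade`, and they are removed by `helm uninstall`. Alternatively, the CRD chart could be published and installed separately first, exactly as method 2 in the Helm docs describes.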
I'd be happy to open a PR with the required refactoring, without impacting current installations.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@cmontemuino thanks for the proposal. I like the idea 👍 Helm 3 CRD management is b0rken if you ask me as an end-user.
Would you be willing to work on this?
ping @ArangoGutierrez @yevgeny-shnaidman
@marquiz we once talked about unifying the node-feature-discovery and node-feature-discovery-operator repos. If this is something we might start doing in the near future, then IMHO this issue might be better implemented after the unification.
- https://github.com/kubernetes-sigs/node-feature-discovery/pull/1807
/milestone v0.18