
Helm Chart for Cluster API GitOps Installation / Operation

tuunit opened this issue on Oct 21, 2024

What would you like to be added (User Story)?

As an operator of Cluster API, I want to be able to install and manage Cluster API using a GitOps approach, for easier deployment and management of large Kubernetes cluster environments.

Detailed Description

A Helm chart for Cluster API would provide a standardized and convenient way to deploy and manage Cluster API components. This would be particularly beneficial for GitOps workflows, where infrastructure is defined and managed declaratively as code.

Basic idea:

  • A generic Helm chart for the Cluster-API CRDs
  • Installation and upgrade of the cluster operator and webhook controller
  • On top of that, Cluster API providers could ship their own Helm charts containing their specific controller implementations and CRDs
  • This would allow for integration with popular GitOps tools like ArgoCD (see the sketch below)
  • Pre-configured values for common Cluster API configurations?
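For illustration, installing such a chart through ArgoCD could look roughly like the following Application manifest. The chart repository URL, chart name, and values are purely hypothetical placeholders, since no official chart exists yet:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-api
  namespace: argocd
spec:
  project: default
  source:
    # Hypothetical chart repository and chart name; no official CAPI chart exists yet.
    repoURL: https://example.org/cluster-api-helm-charts
    chart: cluster-api
    targetRevision: 1.0.0
    helm:
      values: |
        # Hypothetical values illustrating "pre-configured values for common configurations".
        manager:
          replicas: 1
  destination:
    server: https://kubernetes.default.svc
    namespace: capi-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      # Server-side apply helps with the large CAPI CRDs.
      - ServerSideApply=true
```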

Anything else you would like to add?

As mentioned in issue #2811, there has been previous interest in this feature. I would be happy to contribute a PR for the Helm chart if it is deemed desirable.

Label(s) to be applied

/kind feature

One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

tuunit avatar Oct 21 '24 11:10 tuunit

This issue is currently awaiting triage.

If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Oct 21 '24 11:10 k8s-ci-robot

I think this is what https://github.com/kubernetes-sigs/cluster-api-operator tries to accomplish.
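
As far as I know, the operator ships its own Helm chart and then lets you manage providers declaratively through CRDs, so a GitOps flow is already possible today. A provider installation looks roughly like this (the exact API version and fields depend on the operator release you use):

```yaml
# Managed by cluster-api-operator; applied after installing the operator's own chart.
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
spec:
  version: v1.8.0  # example version, pick the CAPI release you need
```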

chrischdi avatar Oct 22 '24 07:10 chrischdi

This is an old and recurring discussion. It is not that we don't think Helm charts could be useful, but we don't want to host Helm charts if we are not sure we can guarantee their quality over time.

This requires a group of people committed to implementing the E2E tests that validate those charts, ensuring they remain operational, and making sure they keep working across upgrades.

Also, if we do have such a team, we should probably discuss whether this overlaps with the goals of the cluster-api-operator project, and whether this repo is the right place for the charts, or whether a separate repo would be a better solution, e.g. because it decouples the release cycles and allows grouping the Helm charts for CAPI and the CAPI providers (many other projects follow the same approach).

fabriziopandini avatar Oct 30 '24 13:10 fabriziopandini

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 28 '25 14:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 27 '25 14:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 29 '25 15:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Mar 29 '25 15:03 k8s-ci-robot