cluster-api-provider-gcp
Support creation of GKE based clusters
/kind feature
Describe the solution you'd like
Based on a query from Slack (with follow-up comments here), it would be good to support the creation of GKE-based clusters. This would bring CAPG in line with CAPA and CAPZ, both of which support creating unmanaged and managed clusters.
Anything else you would like to add:
I implemented the EKS control plane in CAPA, and with hindsight there are a few things I would do differently if I started again. These learnings may be useful to any implementation of this.
/assign
There is an ongoing discussion upstream in CAPI on "what does managed Kubernetes look like in CAPI?". The AKS and EKS implementations are similar, with some subtle differences. It's probably best to wait until those discussions have completed before we get too far with this implementation... but it shouldn't block us, and we could start.
We'll need to implement machine pools (#297) as part of this.
I would like to contribute to this project for GSOC 2022 so can you help me by telling how I should get started with it?
@Mayank-KS @jayesh-srivastava To start with, I'd suggest learning the core concepts of CAPI: go through the docs, beginning with the getting-started guide.
For this change, it's important to understand the different responsibilities and contracts for:
- Cluster
  - https://cluster-api.sigs.k8s.io/developer/architecture/controllers/cluster.html
  - https://cluster-api.sigs.k8s.io/developer/providers/cluster-infrastructure.html
- Control Plane
  - https://cluster-api.sigs.k8s.io/developer/architecture/controllers/control-plane.html
And also more generally:
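To illustrate how that contract split might apply to GKE, here is a rough sketch of how the core Cluster object could reference provider-specific managed objects. This is only an assumption based on the pattern CAPA/CAPZ use; the actual kind and field names would come from the proposal, not from this sketch:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-gke-cluster
spec:
  infrastructureRef:                 # cluster-infrastructure contract
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: GCPManagedCluster          # hypothetical managed-infra kind
    name: my-gke-cluster
  controlPlaneRef:                   # control-plane contract
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: GCPManagedControlPlane     # hypothetical kind mapping to a GKE cluster
    name: my-gke-control-plane
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPManagedCluster
metadata:
  name: my-gke-cluster
spec:
  project: my-project                # illustrative field names
  region: us-central1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPManagedControlPlane
metadata:
  name: my-gke-control-plane
spec:
  project: my-project
  location: us-central1
```

The key point is that with a managed offering like GKE, the "control plane" object reconciles into the GKE cluster itself rather than into control-plane machines.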
/cc @pydctw
Related: Managed Kubernetes proposal in CAPI
> We'll need to implement machine pools (#297) as part of this.
@richardcase Hey. I am trying to understand the role of GCPMachinePool in adding GKE support. As per the managed node group for worker nodes in the above doc, GCPManagedMachinePool represents the node pools. Hence the gcp-managed-machine-pool controller will take care of creating/deleting node pools, and internally the instance groups are created by Google Cloud/GKE.
As per my understanding, gcp-machine-pool corresponds to the unmanaged GCP cluster.
Can you explain the need for #297 for GKE support?
I am new to CAPI and CAPG, so I might be missing something here. Thanks in advance.
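To make the relationship concrete: in CAPI, a managed node pool pairs a core MachinePool with a provider-specific infrastructure object. A rough sketch, following the pattern other providers use (kind and field names are illustrative assumptions, not the final CAPG API):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: my-gke-pool
spec:
  clusterName: my-gke-cluster
  replicas: 3
  template:
    spec:
      clusterName: my-gke-cluster
      bootstrap:
        dataSecretName: ""           # GKE manages node bootstrap, so no bootstrap provider is needed
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: GCPManagedMachinePool  # hypothetical kind reconciled into a GKE node pool
        name: my-gke-pool
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPManagedMachinePool
metadata:
  name: my-gke-pool
```

This is why #297 matters: the MachinePool machinery (the core type plus its reconciliation in CAPG) has to exist before a GCPManagedMachinePool can plug into it.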
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I'm unlikely to have the spare time to complete this myself, so:
/unassign
I'd be happy to mentor someone implementing this though (based on the advice from the proposal).
I have now changed jobs so:
/assign
/lifecycle active
@richardcase: GitHub didn't allow me to assign the following users: richardchen-db.
Note that only kubernetes-sigs members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign richardchen-db
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'll work with @richardcase on this
/assign richardchen331
Great, we are both assigned now.
@richardchen331 I also want to be a part of this, would like to assist you and @richardcase, and want to understand the whole process for this implementation if that's okay.
@jayesh-srivastava - we have started to break the work down, see the task list in the description ^^^^^, and this was based on the proposal in #764.
Shall the 3 of us have a call to discuss?
@richardcase Thanks. Sure, I'm up for a call.
I want to work on some of the parts too. I will start looking at the proposal and if there is some call I am happy to join.
hello @richardcase, I am new to contributing and would like to take this opportunity to understand the process and also help you and others on this feature. Request you to include me on any follow up discussions.
Hey @richardcase, I'm also interested in working on it. I have some previous experience working on CAPZ (cluster-api-provider-azure). Are the tasks already split up?
PR4 has now merged, so that's all the base reconcilers. PR5 is open to enable the controllers/webhooks via a feature flag.
Work on the e2e tests is ongoing (see PR #844); once that's complete, we are done with the initial implementation.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This has been done, so:
/close
@richardcase: Closing this issue.
In response to this:
This has been done, so:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.