cluster-api
📖 Managed Kubernetes in CAPI proposal
What this PR does / why we need it:
This proposal discusses various options for how managed Kubernetes services could be represented in Cluster API by providers and makes a recommendation for new implementations. One of the main goals/motivations of the proposal is to reach a consensus on how ClusterClass should be supported for managed Kubernetes by agreeing on the API option.
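For context, a minimal sketch of the kind of representation being discussed, assuming a hypothetical provider with `ExampleManagedControlPlane` and `ExampleManagedCluster` kinds (the kind names are illustrative only; the actual recommendation is in the proposal document):

```yaml
# Hypothetical example: a Cluster whose control plane is a cloud-managed
# offering. The Cluster resource references a provider-specific managed
# control plane object and a provider-specific infrastructure cluster object.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: managed-example
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: ExampleManagedControlPlane   # hypothetical provider kind
    name: managed-example-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: ExampleManagedCluster        # hypothetical provider kind
    name: managed-example
```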
cc @richardcase
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #6126
As discussed during 8/3/22 office hrs, could the reviewers take another look at it? @richardcase and I addressed most comments in the initial google doc and moved to this PR.
cc @alexeldeib, @CecileRobertMichon, @enxebre, @fabriziopandini, @jackfrancis, @sbueringer, @yastij
@joekr, @shyamradhakrishnan, could you also review the proposal, as you are looking into a managed k8s implementation for the OCI provider?
cc @zmalik @mtougeron
thanks @pydctw looks great, dropped some comments.
We discussed the proposal during 8/17/22 office hrs. Could we get /lgtm from the reviewers?
@alexeldeib, @CecileRobertMichon, @enxebre, @fabriziopandini, @jackfrancis, @sbueringer, @yastij, @joekr, @shyamradhakrishnan
/lgtm
I've added some comments for consideration. Overall this lgtm, thanks for doing the hard work of documenting all of our learnings from implementing Managed Kubernetes, so that future providers will benefit from a common set of recommendations!
lgtm pending markdown linkchecker
@richardcase I think you have to change the links from html to md
Thanks @sbueringer - i'm on it :+1:
Thank you!
/lgtm (+/- squash but we can do that before merge)
Can we please squash commits? /lgtm
@pydctw: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-cluster-api-e2e-main | 478615abc65281aafd82ba1a95fd817b8371c7c0 | link | true | /test pull-cluster-api-e2e-main |
| pull-cluster-api-e2e-informing-main | 478615abc65281aafd82ba1a95fd817b8371c7c0 | link | false | /test pull-cluster-api-e2e-informing-main |
| pull-cluster-api-e2e-informing-ipv6-main | 478615abc65281aafd82ba1a95fd817b8371c7c0 | link | false | /test pull-cluster-api-e2e-informing-ipv6-main |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
We added new commits to address comments to make reviewing easier; now that most of the comments are addressed, the commits have been squashed.
We reached lgtm quorum from reviewers, lazy consensus until Friday August 26th starting now
/lgtm
I assume https://github.com/kubernetes-sigs/cluster-api/pull/6988#discussion_r951689359 is non-blocking and can be addressed in a follow-up if necessary.
/lgtm
The lazy consensus deadline passed. Could we merge the PR? cc @sbueringer @fabriziopandini
Yes!
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: sbueringer
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [sbueringer]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Thx @pydctw & @richardcase for pushing this forward!! :tada: