cluster-api-provider-aws
Support multiple endpoints for API (private + internet-facing)
/kind feature
Describe the solution you'd like
The current implementation of CAPA for non-managed clusters only allows the creation of either a private ELB or an internet-facing ELB for the Kubernetes API.
In contrast, when creating managed clusters it's possible to create both private and internet-facing endpoints (as this is a feature of EKS).
We'd like the ability to create both types of endpoint while still using non-managed clusters. Ref: https://github.com/giantswarm/roadmap/issues/492
With the Load Balancer Provider proposal ongoing, it seems unlikely that CAPA would introduce anything wildly different, but it might be possible to cover this simple use case (needing both private and internet-facing) by adding a new value to ClassicELBScheme, something like both, to indicate the desire to have both types created.
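As a rough sketch only (the both value is the hypothetical addition proposed above; the other fields reflect the existing AWSCluster API), this could look like:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: example
spec:
  controlPlaneLoadBalancer:
    # Existing schemes are "internet-facing" and "internal";
    # "both" is the hypothetical new value, indicating that one
    # load balancer of each scheme should be created for the API.
    scheme: both
```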
Anything else you would like to add:
There are some related issues in upstream CAPI:
- https://github.com/kubernetes-sigs/cluster-api/issues/5295
- https://github.com/kubernetes-sigs/cluster-api/issues/1250 (Proposal doc: https://docs.google.com/document/d/1wJrtd3hgVrUnZsdHDXQLXmZE3cbXVB5KChqmNusBGpE/edit)
@AverageMarcus: This issue is currently awaiting triage.
If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I've created a draft PR to outline an approach that could be implemented without the need to wait for the load balancer provider proposal.
https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/2852
In today's CAPA meeting, @randomvariable mentioned that for this to be usable, CAPI needs to support multiple cluster endpoints (https://github.com/kubernetes-sigs/cluster-api/issues/5295). Does that sound right to you, @AverageMarcus?
Ideally, yes, but I'm not sure it's completely required. All cluster resources (e.g. worker nodes) would make use of the internal API endpoint and we could reference that as the ControlPlaneEndpoint.
Things get a little messier when it comes to the kubeconfig secret generated for the workload cluster. I'm not sure if there are any previous examples of a kubeconfig containing multiple entries, but we could generate a secret containing two different kubeconfig contexts, one for each endpoint.
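As a sketch of what that secret's contents could look like (cluster and user names, DNS placeholders, and the choice of default context are all assumptions, not an existing CAPI convention):

```yaml
apiVersion: v1
kind: Config
clusters:
  # One cluster entry per API endpoint; DNS names are placeholders.
  - name: example-private
    cluster:
      server: https://<internal-elb-dns>:6443
      certificate-authority-data: <base64-ca>
  - name: example-public
    cluster:
      server: https://<internet-facing-elb-dns>:6443
      certificate-authority-data: <base64-ca>
contexts:
  - name: example-private
    context:
      cluster: example-private
      user: example-admin
  - name: example-public
    context:
      cluster: example-public
      user: example-admin
# Assumed default: in-cluster consumers use the private endpoint.
current-context: example-private
users:
  - name: example-admin
    user:
      client-certificate-data: <base64-cert>
      client-key-data: <base64-key>
```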
I don't want to add complexity to CAPA without us sorting out the problem of consuming these endpoints from a Cluster API perspective. Which endpoint a management cluster should use, depending on where it's located, is not very clear from the proposed implementation.
Blocked by https://github.com/kubernetes-sigs/cluster-api/issues/5295
/triage accepted
/priority important-longterm
/milestone backlog
cc @lubronzhan
I assume CAPI still needs a corresponding change? CAPA could expose both endpoints in the cluster, but CAPI would be responsible for generating kubeconfigs for both endpoints.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Can we get a /lifecycle frozen added to this to match the CAPI issue blocking this (https://github.com/kubernetes-sigs/cluster-api/issues/5295)?
Edit: Didn't realise I had the ability to set the lifecycle 😁
/remove-lifecycle frozen
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
From triage 12/2022:
- Use case is to make kubelets use the private endpoint to avoid bandwidth egress charges, while allowing end users to use public endpoint.
- Even without core CAPI support for multiple endpoints, CAPA could create the infra for the private endpoint, and users could modify their kubeconfig to use it (see the sketch after this list).
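For illustration (a sketch with placeholder names; the <cluster-name>-kubeconfig secret is the one CAPI already generates), a user could repoint their copy of the kubeconfig at the private endpoint by editing the server field:

```yaml
# In a copy of the kubeconfig retrieved from the <cluster-name>-kubeconfig
# secret, switch the server to the internal load balancer's DNS name so
# in-VPC clients avoid bandwidth egress charges. DNS names are placeholders.
clusters:
  - name: example
    cluster:
      server: https://<internal-elb-dns>:6443  # was https://<internet-facing-elb-dns>:6443
```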
/triage accepted
/priority important-longterm
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten