cluster-api-provider-aws
API Evolution for VPC and Networking Topologies
/kind feature
Describe the solution you'd like
There are differing options for how to run Kubernetes clusters in AWS; these include:
- IPv6 vs IPv4
- NAT gateways for internet connectivity vs. DirectConnect vs. internal only
- A plethora of CNIs
- Multiple load balancer implementations for Kubernetes services
How does a user figure out which one to use, and how can we best enable them?
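For context, several of these choices already surface (or could surface) under the AWSCluster networkSpec. A minimal sketch, assuming the v1alpha3 field names (`vpc.cidrBlock`, `subnets[].isPublic`, etc.):

```yaml
# Sketch only: field names assume the v1alpha3 AWSCluster API.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
metadata:
  name: example
spec:
  region: us-west-2
  networkSpec:
    vpc:
      cidrBlock: 10.0.0.0/16    # IPv4 today; IPv6 is one of the open questions
    subnets:
      - availabilityZone: us-west-2a
        cidrBlock: 10.0.0.0/24
        isPublic: true          # public subnet, hosting the NAT gateway
      - availabilityZone: us-west-2a
        cidrBlock: 10.0.1.0/24
        isPublic: false         # private subnet, egress via the NAT gateway
```

Topologies that don't fit this shape (DirectConnect, internal-only, alternate load balancers) are what the rest of this issue is about.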
Related issues include: #931, #1208, #1158, #1062, #1727
Anything else you would like to add:
A proposal for this should include an evolutionary roadmap for the API: what are the most immediate concerns that can be addressed as additions to the v1alpha3 API, and what should be considered breaking API changes?
@fabriziopandini if you have any thoughts on this, would be appreciated.
@randomvariable I have something similar on the radar, but I doubt we can work out the details during this iteration, ref https://github.com/kubernetes-sigs/cluster-api/issues/1729
- [ ] Prototype interactive mode for allowing users to set "on-the-fly" the variables to be injected in the yaml for providers components or providers templates
- [ ] Prototype a pluggable template system (vs supporting only variables substitution)
In order to do the above, a way should be defined for clusterctl to interact with each provider while creating the cluster template, because each provider owns the knowledge of what can and cannot be configured. Considering this should be accepted by each provider, I assume this requires a CAEP.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/lifecycle frozen
Also relates to #1643 and #1323
@richardcase Would be useful to get the requirements for EKS down. I would have thought we can make EKS work with the existing topology.
@randomvariable - i'll start documenting the requirements. The current default topology doesn't work because the two subnets it creates (one public and one private) are in a single AZ, and EKS requires subnets in at least two AZs.
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html
It would include stuff from there and other requirements.
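To illustrate the gap: an EKS-compatible networkSpec would need subnets spread across at least two AZs, something like the following (a sketch, assuming the existing `subnets[]` fields):

```yaml
# Sketch: subnets in two AZs, as EKS requires.
networkSpec:
  subnets:
    - availabilityZone: us-west-2a
      cidrBlock: 10.0.0.0/24
      isPublic: true
    - availabilityZone: us-west-2b
      cidrBlock: 10.0.1.0/24
      isPublic: true
    - availabilityZone: us-west-2a
      cidrBlock: 10.0.2.0/24
      isPublic: false
    - availabilityZone: us-west-2b
      cidrBlock: 10.0.3.0/24
      isPublic: false
```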
I've been asked to document our specific topology in here (ref: https://kubernetes.slack.com/archives/CD6U2V71N/p1588804729193200)
Our starting point in the account is that a Direct Connect Gateway (DXG) exists. What we would need CAPA to be able to do is:
- Accept DXG ID as input (probably as part of the AWSCluster NetworkSpec?)
- Ability to create a Virtual Private Gateway/VPN Gateway/VGW (it has many names)
- Attach the VGW to the VPC once it's created
- Attach the VGW to the DXG (that's a very slow process, several minutes to reconcile)
- Accept an array of CIDRs to add route table entries for with a target of the VGW (or a flag to enable route propagation)
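The list above could map onto the API roughly as follows. This is purely hypothetical: none of these fields (`vpnGateway`, `directConnectGatewayID`, `additionalRoutes`) exist in CAPA today; they only illustrate the requested inputs:

```yaml
# Hypothetical sketch only -- these fields do not exist in CAPA.
networkSpec:
  vpc:
    cidrBlock: 10.0.0.0/16
  vpnGateway:
    create: true                       # create a VGW and attach it to the VPC
    directConnectGatewayID: dxgw-0123456789abcdef0   # pre-existing DXG to attach the VGW to
    enableRoutePropagation: true       # or, alternatively, explicit routes:
  additionalRoutes:
    - destinationCidrBlock: 172.16.0.0/12   # route table entry targeting the VGW
```

Whatever shape this takes, the slow DXG attachment mentioned above suggests the reconciler would need to tolerate a multi-minute pending state.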
/assign
/assign @voor
@randomvariable: GitHub didn't allow me to assign the following users: voor.
Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign @voor
/lifecycle active (only from a gathering thoughts perspective)
Will be sharing a Google Doc with initial ideas in a little while. Will definitely be towards v1alpha4 though.
Another use/corner(?) case, maybe: transit gateway peerings.
We spin up clusters by default with internal load balancers only.
To make them available to corp intranet/VPN users etc., we do some transit gateway peering afterwards to make the cluster:
- available to the internal network/routing
- ensure visibility to other parts of the infrastructure (CI/CD) in different AWS accounts
- enable the new cluster to also reach other pieces of infra, like CI/CD or services in other clusters
- also: we create a second VPC which contains only RDS instances, for better separation of concerns
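For reference, the internal-load-balancer part of this is already expressible; assuming the v1alpha3 `controlPlaneLoadBalancer` field, something like:

```yaml
# Sketch, assuming the v1alpha3 AWSCluster API: internal-only API server ELB.
spec:
  controlPlaneLoadBalancer:
    scheme: internal   # instead of the default internet-facing
```

The transit gateway peering itself has no API surface in CAPA and still has to happen out of band, which is the gap this comment describes.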
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
/triage accepted
Another use case is using only public subnets: https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/2997
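A public-subnets-only topology would presumably be expressed by declaring only `isPublic` subnets (sketch, assuming the existing fields; whether the controllers tolerate the absence of private subnets is exactly what that issue is about):

```yaml
# Sketch: no private subnets declared at all.
networkSpec:
  subnets:
    - availabilityZone: us-west-2a
      cidrBlock: 10.0.0.0/24
      isPublic: true
    - availabilityZone: us-west-2b
      cidrBlock: 10.0.1.0/24
      isPublic: true
```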
Kinda related: https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/3035
/remove-lifecycle frozen
/milestone v2.x
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
Still relevant and related to https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/3711