cluster-api-provider-aws
Allow empty AMI ID and instance type in the launch template of AWSManagedMachinePool
/kind feature
We are trying to adopt CAPI + CAPA for our cluster while keeping the same configuration, but found that part of it is not supported in the launch template.
Describe the solution you'd like
In the AWSManagedMachinePool spec:
- when `AMI type` is defined (except the `CUSTOM` type), the controller should create the launch template without an AMI ID. This allows EKS to use its managed AMI together with the launch template.
- when `Instance type(s)` is defined, the controller should create the launch template without instance type(s) (see the manifest sketch below).
See the EKS documentation: Customizing managed nodes with launch templates.
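To make the request concrete, here is a minimal AWSManagedMachinePool sketch. The field names approximate CAPA's v1beta2 API, and all values (names, security group ID, volume settings) are placeholders, so please check them against the actual CRD:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedMachinePool
metadata:
  name: managed-pool-with-lt        # placeholder name
spec:
  # A non-CUSTOM amiType: the controller would render the launch template
  # without an AMI ID, letting EKS supply its managed AMI.
  amiType: AL2_x86_64
  awsLaunchTemplate:
    name: managed-pool-with-lt
    # ami and instanceType are intentionally left out (the requested behavior);
    # only the customizations we actually need are set.
    additionalSecurityGroups:
      - id: sg-0123456789abcdef0    # placeholder security group ID
    rootVolume:
      size: 100
      type: gp3
```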
This is the launch template from our running EKS node group; we use it to customize the security group and storage.
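Roughly, it contains only the pieces below (a sketch with placeholder values in the shape of EC2 launch template data, not the exact export from our account); note that it carries no ImageId or InstanceType, which is exactly what we would like CAPA to reproduce:

```yaml
# Sketch of the launch template data we maintain today (placeholder values).
LaunchTemplateData:
  SecurityGroupIds:
    - sg-0123456789abcdef0          # placeholder: extra node security group
  BlockDeviceMappings:
    - DeviceName: /dev/xvda
      Ebs:
        VolumeSize: 100
        VolumeType: gp3
        Encrypted: true
  # No ImageId and no InstanceType here: EKS fills in the managed AMI and the
  # node group provides the instance type(s).
```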
Anything else you would like to add:
- If this sounds fine, we're happy to submit a PR for the implementation.
- Do we also have to review the proposal doc?
Environment:
- Cluster-api-provider-aws version:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
/triage accepted
/priority important-soon
/milestone v2.3.0
/milestone v2.4.0
to match the delayed PR https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/4388
@AndiDog: You must be a member of the kubernetes-sigs/cluster-api-provider-aws-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Cluster API Provider AWS Maintainers and have them propose you as an additional delegate for this responsibility.
In response to this:
/milestone v2.4.0
to match the delayed PR https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/4388
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone v2.4.0
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Deprioritize it with `/priority important-longterm` or `/priority backlog`
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten