cluster-api-provider-aws

Support a list of instance types in AWSManagedMachinePool

Open pydctw opened this issue 3 years ago • 20 comments

/kind feature

Describe the solution you'd like: AWSManagedMachinePool has a field, instanceType, for specifying a single instance type. We should support a list of instance types for the underlying node group to use.

This is primarily useful with spec.capacityType: spot, where specifying a set of instance types increases your odds of finding adequate capacity, but there's nothing stopping you from using it with spec.capacityType: onDemand. More details: https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#managed-node-group-capacity-types

See "eks: allow specifying multiple instance types for AWSManagedMachinePool" for the initial discussion. Thanks @jashandeep-sohi for starting this!

Anything else you would like to add: The implementation should wait for launch template support for AWSManagedMachinePool.

  • https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/3094
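Once a plural field exists, the reconciler would also need backward-compatible resolution between the singular and plural fields. A minimal sketch of that logic, assuming the hypothetical field names above (this is not CAPA's actual API):

```go
package main

import "fmt"

// effectiveInstanceTypes resolves which instance types to pass to the EKS
// managed node group: prefer the proposed plural list when set, fall back
// to the existing singular field for backward compatibility, and return
// nil when neither is set (letting EKS apply its default).
func effectiveInstanceTypes(instanceType string, instanceTypes []string) []string {
	if len(instanceTypes) > 0 {
		return instanceTypes
	}
	if instanceType != "" {
		return []string{instanceType}
	}
	return nil
}

func main() {
	// Existing manifests with only instanceType keep working:
	fmt.Println(effectiveInstanceTypes("m5.large", nil))
	// New manifests with the plural field take precedence:
	fmt.Println(effectiveInstanceTypes("", []string{"m5.large", "m5a.large"}))
}
```

A webhook could additionally reject manifests that set both fields to conflicting values, but that is a design choice for the eventual implementation.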

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

pydctw avatar Jul 12 '22 15:07 pydctw

/triage accepted
/priority important-longterm
/area provider/eks

sedefsavas avatar Jul 13 '22 21:07 sedefsavas

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 11 '22 22:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 10 '22 22:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 10 '22 23:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 10 '22 23:12 k8s-ci-robot

/remove-lifecycle rotten

com6056 avatar Dec 13 '22 03:12 com6056

/reopen

com6056 avatar Dec 13 '22 03:12 com6056

@com6056: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen


k8s-ci-robot avatar Dec 13 '22 03:12 k8s-ci-robot

/reopen

pydctw avatar Dec 13 '22 08:12 pydctw

@pydctw: Reopened this issue.

In response to this:

/reopen


k8s-ci-robot avatar Dec 13 '22 08:12 k8s-ci-robot

Is there any timeline for this feature to get added? Without it (along with https://github.com/kubernetes-sigs/cluster-api-provider-aws/issues/2574), using spot instances is extremely difficult when there are spot capacity constraints.

com6056 avatar Feb 14 '23 01:02 com6056

/milestone v2.3.0

richardcase avatar Jul 10 '23 16:07 richardcase

Any idea when this is supposed to be implemented?

idanshaby avatar Oct 31 '23 09:10 idanshaby