
Kops Should Fail Fast and Early if Incompatible AWS Zones are Supplied

Open pluttrell opened this issue 9 years ago • 2 comments

If someone uses Kops to create a cluster and specifies a zone that doesn't offer the default instance size, the cluster will ultimately fail, and the error will be buried in the AWS console's ASG details. This makes for a pretty poor user experience. An example of this is us-east-1a, which doesn't offer the default instance size that Kops uses. Some have also reported that Kops can't create subnets in us-east-1a, but I don't have that problem.

A better user experience for these types of incompatibilities would be to fail fast and early, with a clear message. For example: "Kops cannot create a cluster with nodes in zone 'us-east-1a' of type 't2.medium' because that instance type is not available."

It feels like we need a better way to test whether a user's AWS account will work as specified by their Kops config, whether manual or defaulted.

I don't know anything about the internals of how Kops works, but I wonder if a validation phase should be added before actual creation starts. An initial implementation would not need to include every single check; more could be added as they're requested or found to be a problem.

These validation checks might be implemented differently depending on what it would take to validate. For example, testing whether a subnet can be created in a particular zone might require actually creating a test subnet.
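A fail-fast check like the one proposed above could be sketched roughly as follows (in Go, since kops is written in Go). All names here are hypothetical and not part of kops; a real implementation would populate the zone-to-instance-type table from the EC2 `DescribeInstanceTypeOfferings` API rather than hard-coding it:

```go
package main

import "fmt"

// zoneOfferings maps an availability zone to the instance types offered
// there. Hard-coded sample data for illustration only; a real check would
// query the EC2 DescribeInstanceTypeOfferings API instead.
var zoneOfferings = map[string][]string{
	"us-east-1a": {"t3.medium", "m5.large"},
	"us-east-1b": {"t2.medium", "t3.medium", "m5.large"},
}

// validateZoneInstanceType returns a clear error when the requested
// instance type is not offered in the requested zone, so the failure
// surfaces before any cluster resources are created.
func validateZoneInstanceType(zone, instanceType string) error {
	offered, ok := zoneOfferings[zone]
	if !ok {
		return fmt.Errorf("unknown availability zone %q", zone)
	}
	for _, t := range offered {
		if t == instanceType {
			return nil
		}
	}
	return fmt.Errorf(
		"cannot create a cluster with nodes in zone %q of type %q because that instance type is not available there",
		zone, instanceType)
}

func main() {
	// Reproduces the scenario from this issue: t2.medium in us-east-1a.
	if err := validateZoneInstanceType("us-east-1a", "t2.medium"); err != nil {
		fmt.Println("validation failed:", err)
	}
}
```

The point of the sketch is that the check runs up front against the resolved config, so the user sees one actionable message instead of digging through ASG activity history after the cluster has partially come up.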

Some might not want to execute the validation phase, to save time or for other reasons, so perhaps add a --no-validate flag to Kops that would let people skip it.

pluttrell avatar Feb 24 '17 20:02 pluttrell

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /lifecycle stale

fejta-bot avatar Dec 21 '17 12:12 fejta-bot

/remove-lifecycle stale /lifecycle frozen

pluttrell avatar Jan 15 '18 07:01 pluttrell

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 31 '22 18:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 30 '22 19:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 30 '22 20:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 30 '22 20:12 k8s-ci-robot