
E2E framework should fail faster when terminal provisioning failures occur

Open detiber opened this issue 4 years ago • 4 comments

There is currently no verification of progress when waiting for all control plane machines to become available, for all MachineDeployment machines to become available, for KCP/MD scaling operations, or for KCP/MD rollouts. If a terminal failure occurs (a failure message/reason is set on an owned Machine), you have to wait for the wait-machine-upgrade, wait-control-plane, and/or wait-worker-nodes timeout to trigger. It would be nice to also have a separate progress timeout, so that these types of failures cause the test to fail more quickly and in a way that is easier to debug.

Originally posted by @detiber in https://github.com/kubernetes-sigs/cluster-api/issues/6143#issuecomment-1041593793

detiber avatar Mar 02 '22 15:03 detiber

/milestone v1.2 /area testing

This is somewhat related to the ongoing discussion about how to report terminal failures, e.g. https://github.com/kubernetes-sigs/cluster-api/pull/6218

fabriziopandini avatar Mar 02 '22 16:03 fabriziopandini

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 31 '22 16:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jun 30 '22 17:06 k8s-triage-robot

/triage accepted /help-wanted

fabriziopandini avatar Aug 05 '22 17:08 fabriziopandini

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 04 '22 18:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 04 '22 18:09 k8s-ci-robot

/reopen

I think this is still a nice to have if someone has time to take it on.

killianmuldoon avatar Sep 14 '22 10:09 killianmuldoon

@killianmuldoon: Reopened this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 14 '22 10:09 k8s-ci-robot

/remove-lifecycle rotten (AFAIK it's otherwise just closed again the next time the job runs)

sbueringer avatar Sep 14 '22 10:09 sbueringer

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 13 '22 11:12 k8s-triage-robot

/lifecycle frozen

fabriziopandini avatar Dec 14 '22 16:12 fabriziopandini

(doing some cleanup on old issues without updates) /close Unfortunately, no one is picking up this task. The thread will remain available for future reference.

fabriziopandini avatar Mar 24 '23 19:03 fabriziopandini

@fabriziopandini: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 24 '23 19:03 k8s-ci-robot