cluster-api-provider-aws
[CAPI] Clusterctl Upgrade Spec [from latest v1beta1 release to v1beta2] Should create a management cluster and then upgrade all the providers
/kind bug
What steps did you take and what happened:
This issue relates to failures seen in cluster 88be0e7359055742becb
Error text
Timed out after 1200.000s.
Expected
<bool>: false
to be true
[FAILED] Timed out after 1200.000s.
Expected
<bool>: false
to be true
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/[email protected]/framework/cluster_helpers.go:176 @ 04/08/24 10:45:47.01
Recent failures:
- 4/18/2024, 5:13:56 AM periodic-cluster-api-provider-aws-e2e-eks-canary
- 4/17/2024, 7:01:21 PM ci-cluster-api-provider-aws-e2e
- 4/17/2024, 5:12:56 PM periodic-cluster-api-provider-aws-e2e-eks-canary
- 4/17/2024, 10:27:57 AM periodic-cluster-api-provider-aws-e2e
- 4/17/2024, 6:57:59 AM periodic-cluster-api-e2e-release-1-6
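For context, this failure message is the generic Gomega Eventually timeout: the test framework polls a boolean condition (here, presumably that the workload cluster created by the upgrade spec has been cleaned up) and fails once the 1200s window elapses. Below is a minimal Go sketch of that pattern, assuming a hypothetical clusterIsDeleted helper; it is not the actual code at cluster_helpers.go:176.

```go
package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// clusterIsDeleted is a hypothetical stand-in for the framework's real
// condition (roughly: "the Cluster object is gone from the management
// cluster"). The real check lives in the CAPI test framework.
func clusterIsDeleted() bool {
	// ...query the management cluster for the Cluster resource...
	return false
}

func TestWaitForClusterDeleted(t *testing.T) {
	g := NewWithT(t)

	// Poll the condition every 10s; if it never returns true within 20
	// minutes, Gomega fails with exactly the output seen above:
	//   Timed out after 1200.000s.
	//   Expected
	//       <bool>: false
	//   to be true
	g.Eventually(clusterIsDeleted, 20*time.Minute, 10*time.Second).Should(BeTrue())
}
```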
What did you expect to happen:
No timeout.
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Environment:
- Cluster-api-provider-aws version:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
This issue is currently awaiting triage.
If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The issue is still being seen: https://storage.googleapis.com/k8s-triage/index.html?text=cluster_helpers.go%3A176
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/kind flake
AWS specific? 3 failures out of 135821 builds from 8/23/2024, 2:00:01 AM to 9/6/2024, 6:07:32 AM, according to https://storage.googleapis.com/k8s-triage/index.html?text=v1.6.1%2Fframework%2Fcluster_helpers.go%3A176
@dims perhaps close due to the small number of occurrences?