Fix, reduce the frequency of, or switch off perma-failing jobs
Data as of June 3rd, 9:00 PM Eastern. Latest data can be found here: http://storage.googleapis.com/k8s-metrics/failures-latest.json
| CI Job | Days Failed |
|---|---|
| ci-test-infra-benchmark-demo | 676 |
| test-infra-fuzz | 604 |
| ci-test-infra-branchprotector | 236 |
| update-clusterfuzz-lite | 109 |
Additional Context: https://github.com/kubernetes/test-infra/issues/18600
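For reference, a ranking like the table above can be regenerated from the failures JSON. The snippet below is only a minimal sketch: the field names (`job`, `days_failed`) and the inline sample records are assumptions for illustration, since the actual schema of failures-latest.json is not shown here.

```python
import json

def longest_failing(raw: str, top_n: int = 4):
    """Return (job, days_failed) pairs sorted by days failed, descending.

    Assumes each record has 'job' and 'days_failed' keys; the real
    failures-latest.json schema may differ.
    """
    records = json.loads(raw)
    ranked = sorted(records, key=lambda r: r["days_failed"], reverse=True)
    return [(r["job"], r["days_failed"]) for r in ranked[:top_n]]

# Hypothetical sample mirroring the table above (not the real file contents).
sample = json.dumps([
    {"job": "update-clusterfuzz-lite", "days_failed": 109},
    {"job": "ci-test-infra-benchmark-demo", "days_failed": 676},
    {"job": "test-infra-fuzz", "days_failed": 604},
    {"job": "ci-test-infra-branchprotector", "days_failed": 236},
])

for job, days in longest_failing(sample):
    print(f"| {job} | {days} |")
```

With real data you would fetch the JSON from the URL above instead of using the inline sample.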
/sig testing
ping @mpherman2 https://github.com/kubernetes/test-infra/pull/28598
I'm pretty sure benchmark-demo is failing on purpose. (e.g. to demo https://github.com/kubernetes/test-infra/blob/fb5096b6af8b4a3003ca79c20befff786a56b436/pkg/benchmarkjunit/README.md#demo-job)
The rest might be legit.
As @michelle192837 said, the benchmark-demo job will keep failing until we rewrite it so that the deliberately failing tests it demos no longer fail the job itself.
The branchprotector job should start passing now that the google-panic-assignment branch has been deleted from the external-dns repo; it was just a false positive causing the failure (thank you @Priyankasaggu11929).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.