origin
OSD-21708: Skip managed services pods that have limits but not requests
Managed-service pods are setting hard limits instead of requests, contrary to OCP conventions. Ref: https://github.com/openshift/enhancements/blob/master/CONVENTIONS.md#resources-and-limits
OSD-21708 tracks fixing these pods; meanwhile, add exceptions so the test passes.
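The convention check described above can be sketched as follows. This is an illustrative Python sketch, not the actual test (which lives in Go under `test/extended/operators` in openshift/origin); the pod shape, namespace, and the `MANAGED_SERVICE_EXCEPTIONS` set are all hypothetical:

```python
# Hypothetical exception set: namespaces of managed-service pods exempted
# until OSD-21708 fixes them to set requests per the OCP conventions.
MANAGED_SERVICE_EXCEPTIONS = {"openshift-example-managed"}  # illustrative name


def violates_resource_conventions(pod):
    """Return True if any container sets a resource limit (e.g. cpu/memory)
    without also setting the corresponding request."""
    for container in pod.get("containers", []):
        resources = container.get("resources", {})
        limits = resources.get("limits", {})
        requests = resources.get("requests", {})
        for resource_name in limits:
            if resource_name not in requests:
                return True
    return False


def should_skip(pod):
    """Skip pods in exempted managed-service namespaces."""
    return pod.get("namespace") in MANAGED_SERVICE_EXCEPTIONS


# Example: a pod that sets a CPU limit but no CPU request would normally
# fail the conventions test, but is skipped via the exception list.
pod = {
    "namespace": "openshift-example-managed",
    "containers": [{"resources": {"limits": {"cpu": "500m"}}}],
}
print(violates_resource_conventions(pod))  # True: limit set without request
print(should_skip(pod))                    # True: exempted namespace
```

The real test iterates cluster pods and asserts the convention, consulting an exception list like the one this PR adds so known offenders do not fail the run.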
@stbenjam: This pull request references OSD-21708 which is a valid jira issue.
Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.16.0" version, but no target version was set.
In response to this:
OSD-21708 tracks fixing these pods, meanwhile add exceptions so the test passes.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: dgoodwin, stbenjam
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~test/extended/operators/OWNERS~~ [dgoodwin,stbenjam]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/retest-required
Remaining retests: 0 against base HEAD 6632b4ab46e9d179bd0d28d6853690e410079e5a and 2 for PR HEAD 437457d6f44d050b0535fd66026f7ebc20357134 in total
/retest-required
Remaining retests: 0 against base HEAD 0d6231f801b696aefb1c5b82a0949c9c1945a048 and 1 for PR HEAD 437457d6f44d050b0535fd66026f7ebc20357134 in total
/override ci/prow/e2e-metal-ipi-ovn-ipv6
Metal failure is unrelated
@stbenjam: Overrode contexts on behalf of stbenjam: ci/prow/e2e-metal-ipi-ovn-ipv6
In response to this:
/override ci/prow/e2e-metal-ipi-ovn-ipv6
Metal failure is unrelated
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/hold
Will update this to do it a little differently.
Job Failure Risk Analysis for sha: 437457d6f44d050b0535fd66026f7ebc20357134
| Job Name | Failure Risk |
|---|---|
| pull-ci-openshift-origin-master-e2e-metal-ipi-ovn-ipv6 | High: `[sig-api-machinery] disruption/oauth-api connection/new should be available throughout the test`<br>This test has passed 100.00% of 9 runs on jobs `periodic-ci-openshift-release-master-nightly-4.17-e2e-metal-ipi-ovn-ipv6` and `periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-ipv6` in the last 14 days. |
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
@stbenjam: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-agnostic-ovn-cmd | 437457d6f44d050b0535fd66026f7ebc20357134 | link | false | /test e2e-agnostic-ovn-cmd |
| ci/prow/e2e-aws-ovn-fips | 437457d6f44d050b0535fd66026f7ebc20357134 | link | unknown | /test e2e-aws-ovn-fips |
| ci/prow/e2e-aws-csi | 437457d6f44d050b0535fd66026f7ebc20357134 | link | false | /test e2e-aws-csi |
| ci/prow/e2e-aws-ovn-single-node-serial | 437457d6f44d050b0535fd66026f7ebc20357134 | link | false | /test e2e-aws-ovn-single-node-serial |
| ci/prow/e2e-aws-ovn-upgrade | 437457d6f44d050b0535fd66026f7ebc20357134 | link | false | /test e2e-aws-ovn-upgrade |
| ci/prow/e2e-gcp-ovn-rt-upgrade | 437457d6f44d050b0535fd66026f7ebc20357134 | link | false | /test e2e-gcp-ovn-rt-upgrade |
| ci/prow/e2e-aws-ovn-single-node-upgrade | 437457d6f44d050b0535fd66026f7ebc20357134 | link | false | /test e2e-aws-ovn-single-node-upgrade |
| ci/prow/e2e-metal-ipi-sdn | 437457d6f44d050b0535fd66026f7ebc20357134 | link | false | /test e2e-metal-ipi-sdn |
| ci/prow/e2e-aws-ovn-edge-zones | 437457d6f44d050b0535fd66026f7ebc20357134 | link | true | /test e2e-aws-ovn-edge-zones |
| ci/prow/e2e-gcp-ovn-builds | 437457d6f44d050b0535fd66026f7ebc20357134 | link | true | /test e2e-gcp-ovn-builds |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.