hyperconverged-cluster-operator
Watch all the CMs in the operator namespace
As per https://bugzilla.redhat.com/show_bug.cgi?id=2063991, we also have to remove ConfigMaps that are not configured with the right label and so are not matched by the client cache. Correctly watch all the ConfigMaps in the namespace, and improve the logging.
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=2063991
Signed-off-by: Simone Tiraboschi [email protected]
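The cache issue described above can be sketched in plain Go (an illustration only, not the actual controller-runtime API or this PR's code): a client cache restricted by a label selector never sees ConfigMaps that are missing the expected label, so the operator cannot clean them up, whereas a cache watching every ConfigMap in the namespace sees them all. The type and label names here are hypothetical.

```go
package main

import "fmt"

// ConfigMap is a minimal stand-in for corev1.ConfigMap.
type ConfigMap struct {
	Name   string
	Labels map[string]string
}

// labelFilteredCache mimics a client cache restricted by a label
// selector: ConfigMaps without the matching label are never cached,
// so the operator cannot see them, let alone remove them.
func labelFilteredCache(cms []ConfigMap, key, value string) []ConfigMap {
	var out []ConfigMap
	for _, cm := range cms {
		if cm.Labels[key] == value { // nil map lookup is safe, returns ""
			out = append(out, cm)
		}
	}
	return out
}

// namespaceCache mimics watching all the ConfigMaps in the namespace:
// even mislabeled ones are visible and can be cleaned up.
func namespaceCache(cms []ConfigMap) []ConfigMap {
	return cms
}

func main() {
	cms := []ConfigMap{
		{Name: "properly-labeled-cm", Labels: map[string]string{"app": "hco"}},
		{Name: "stale-unlabeled-cm"}, // missing the expected label
	}
	fmt.Println("label-filtered cache sees:", len(labelFilteredCache(cms, "app", "hco")))
	fmt.Println("whole-namespace cache sees:", len(namespaceCache(cms)))
}
```

With a label-filtered cache the stale, unlabeled ConfigMap is invisible; watching the whole namespace is what lets the operator find and remove it.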
Reviewer Checklist
Reviewers should review the PR against each item below, one by one. Checking an item means the PR is either "OK" or "Not Applicable" for that item. All items should be checked before a PR is merged.
- [ ] PR Message
- [ ] Commit Messages
- [ ] How to test
- [ ] Unit Tests
- [ ] Functional Tests
- [ ] User Documentation
- [ ] Developer Documentation
- [ ] Upgrade Scenario
- [ ] Uninstallation Scenario
- [ ] Backward Compatibility
- [ ] Troubleshooting Friendly
Release note:
NONE
/cherry-pick release-1.6
@tiraboschi: once the present PR merges, I will cherry-pick it on top of release-1.6 in a new PR and assign it to you.
In response to this:
/cherry-pick release-1.6
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Kudos, SonarCloud Quality Gate passed!
0 Bugs
0 Vulnerabilities
0 Security Hotspots
0 Code Smells
No Coverage information
0.0% Duplication
Pull Request Test Coverage Report for Build 2953713455
- 4 of 7 (57.14%) changed or added relevant lines in 2 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage increased (+0.009%) to 84.977%
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
---|---|---|---
pkg/util/util.go | 1 | 4 | 25.0%
Total: | 4 | 7 | 57.14%

Totals |
---|---
Change from base Build 2948514166: | 0.009%
Covered Lines: | 4463
Relevant Lines: | 5252
💛 - Coveralls
/retest
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/retest
/rebase
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from tiraboschi by writing /assign @tiraboschi in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
@tiraboschi: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name | Commit | Details | Required | Rerun command |
---|---|---|---|---|
ci/prow/hco-e2e-upgrade-index-sno-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | false | /test hco-e2e-upgrade-index-sno-aws |
ci/prow/okd-hco-e2e-upgrade-index-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test okd-hco-e2e-upgrade-index-aws |
ci/prow/hco-e2e-upgrade-prev-index-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-upgrade-prev-index-aws |
ci/prow/hco-e2e-upgrade-prev-index-sno-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | false | /test hco-e2e-upgrade-prev-index-sno-aws |
ci/prow/okd-hco-e2e-upgrade-index-gcp | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test okd-hco-e2e-upgrade-index-gcp |
ci/prow/hco-e2e-upgrade-index-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-upgrade-index-aws |
ci/prow/hco-e2e-image-index-sno-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | false | /test hco-e2e-image-index-sno-aws |
ci/prow/okd-hco-e2e-image-index-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test okd-hco-e2e-image-index-aws |
ci/prow/hco-e2e-image-index-aws | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-image-index-aws |
ci/prow/hco-e2e-image-index-gcp | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-image-index-gcp |
ci/prow/hco-e2e-upgrade-index-azure | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-upgrade-index-azure |
ci/prow/hco-e2e-upgrade-prev-index-sno-azure | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | false | /test hco-e2e-upgrade-prev-index-sno-azure |
ci/prow/hco-e2e-kv-smoke-gcp | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-kv-smoke-gcp |
ci/prow/hco-e2e-kv-smoke-azure | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-kv-smoke-azure |
ci/prow/hco-e2e-image-index-sno-azure | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | false | /test hco-e2e-image-index-sno-azure |
ci/prow/okd-hco-e2e-image-index-gcp | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test okd-hco-e2e-image-index-gcp |
ci/prow/hco-e2e-image-index-azure | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-image-index-azure |
ci/prow/hco-e2e-upgrade-prev-index-azure | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | true | /test hco-e2e-upgrade-prev-index-azure |
ci/prow/hco-e2e-upgrade-index-sno-azure | fc33b4a8519b46be3e3e91309e11d58ee4518ea0 | link | false | /test hco-e2e-upgrade-index-sno-azure |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close