Add SSA annotations to ClusterOperator status fields
Hello @mdbooth! Some important instructions when contributing to openshift/api: API design plays an important part in the user experience of OpenShift and as such API PRs are subject to a high level of scrutiny to ensure they follow our best practices. If you haven't already done so, please review the OpenShift API Conventions and ensure that your proposed changes are compliant. Following these conventions will help expedite the api review process for your PR.
I think this should be fairly innocuous given the expected users of this API so far. We can prove whether it works or not by using E2E on the cluster-config-operator. Can you raise a PR to bump the config operator to this revision and prove there that the clusters are still working?
@deads2k is working on making it so these CRDs are applied directly from this repository, but that is still WIP, so as far as I understand the CCO PR is still required.
/hold
Joel has some concerns that need to be addressed first
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: mdbooth, soltysh
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [soltysh]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Now that things have moved along, if we rebase this we can test directly from this repo whether the potential issues I was concerned about are present; no need for an additional PR.
New changes are detected. LGTM label has been removed.
I thought this was installed by the api repo, but it looks like it might be installed by the CVO; we'll need a bump to the CVO to test whether this is working or not.
@mdbooth: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-aws-ovn | 9841978b25bdf06acf021e593be66c4c3853d108 | link | true | /test e2e-aws-ovn |
| ci/prow/e2e-aws-ovn-techpreview | 9841978b25bdf06acf021e593be66c4c3853d108 | link | true | /test e2e-aws-ovn-techpreview |
| ci/prow/e2e-upgrade | 9841978b25bdf06acf021e593be66c4c3853d108 | link | true | /test e2e-upgrade |
| ci/prow/e2e-aws-serial | 9841978b25bdf06acf021e593be66c4c3853d108 | link | true | /test e2e-aws-serial |
| ci/prow/e2e-aws-serial-techpreview | 9841978b25bdf06acf021e593be66c4c3853d108 | link | true | /test e2e-aws-serial-techpreview |
| ci/prow/e2e-aws-ovn-hypershift | 9841978b25bdf06acf021e593be66c4c3853d108 | link | true | /test e2e-aws-ovn-hypershift |
| ci/prow/e2e-upgrade-minor | 9841978b25bdf06acf021e593be66c4c3853d108 | link | true | /test e2e-upgrade-minor |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
@mdbooth Do you have time to help complete this?
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@mdbooth Do you have time to revive this?