hyperconverged-cluster-operator
ssp: propagate tekton feature gate to SSP
What this PR does / why we need it: This PR propagates the value of the tekton feature gate to SSP. This is still needed to fix: https://issues.redhat.com/browse/CNV-33012
After the bug fix, the user can intentionally remove all tekton resources by setting the feature gate to `nil` or `false`.
This change will make the behavior after an update from the previous version less surprising to the user. It will still be possible to intentionally remove tekton resources without deleting SSP.
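For illustration, here is a minimal Go sketch of what the propagation amounts to. The struct and field names are assumptions that loosely mirror the HCO and SSP APIs; the real operand handler also manages ownership, labels, and status conditions.

```go
package main

import "fmt"

// hcoFeatureGates stands in for the HyperConverged feature-gate struct: the
// tekton gate is a *bool, so "unset" is representable in the API type.
type hcoFeatureGates struct {
	DeployTektonTaskResources *bool
}

// sspFeatureGates stands in for the SSP feature-gate struct: here the gate is
// a plain bool, so "unset" and "false" mean the same thing.
type sspFeatureGates struct {
	DeployTektonTaskResources bool
}

// propagateTektonGate copies the HCO value into the SSP spec. nil and false
// both map to false, which tells SSP to clean up previously created tekton
// resources; true is passed through unchanged.
func propagateTektonGate(hco hcoFeatureGates) sspFeatureGates {
	return sspFeatureGates{
		DeployTektonTaskResources: hco.DeployTektonTaskResources != nil && *hco.DeployTektonTaskResources,
	}
}

func main() {
	enabled := true
	fmt.Println(propagateTektonGate(hcoFeatureGates{DeployTektonTaskResources: &enabled})) // {true}
	fmt.Println(propagateTektonGate(hcoFeatureGates{}))                                    // {false} - unset behaves like false
}
```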
Reviewer Checklist
Reviewers are supposed to review the PR for every aspect below, one by one. Checking an item means the PR is either "OK" or "Not Applicable" with respect to that item. All items are supposed to be checked before merging a PR.
- [ ] PR Message
- [ ] Commit Messages
- [ ] How to test
- [ ] Unit Tests
- [ ] Functional Tests
- [ ] User Documentation
- [ ] Developer Documentation
- [ ] Upgrade Scenario
- [ ] Uninstallation Scenario
- [ ] Backward Compatibility
- [ ] Troubleshooting Friendly
Jira Ticket:
https://issues.redhat.com/browse/CNV-33012
Release note:
None
Pull Request Test Coverage Report for Build 9028782171
Details
- 3 of 3 (100.0%) changed or added relevant lines in 1 file are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage increased (+0.02%) to 85.86%
| Totals | |
|---|---|
| Change from base Build 9007403699: | 0.02% |
| Covered Lines: | 5204 |
| Relevant Lines: | 6061 |
💛 - Coveralls
hco-e2e-upgrade-prev-operator-sdk-sno-aws lane succeeded. /override ci/prow/hco-e2e-upgrade-prev-operator-sdk-sno-azure
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-upgrade-prev-operator-sdk-sno-azure
hco-e2e-upgrade-operator-sdk-sno-azure lane succeeded. /override ci/prow/hco-e2e-upgrade-operator-sdk-sno-aws
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-upgrade-operator-sdk-sno-aws
/retest
/cc @ksimon1 @0xFelix
hco-e2e-consecutive-operator-sdk-upgrades-aws lane succeeded. /override ci/prow/hco-e2e-consecutive-operator-sdk-upgrades-azure hco-e2e-upgrade-prev-operator-sdk-aws lane succeeded. /override ci/prow/hco-e2e-upgrade-prev-operator-sdk-azure
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-consecutive-operator-sdk-upgrades-azure, ci/prow/hco-e2e-upgrade-prev-operator-sdk-azure
hco-e2e-upgrade-prev-operator-sdk-aws lane succeeded. /override ci/prow/hco-e2e-upgrade-prev-operator-sdk-azure
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-upgrade-prev-operator-sdk-azure
hco-e2e-operator-sdk-sno-azure lane succeeded. /override ci/prow/hco-e2e-operator-sdk-sno-aws
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-operator-sdk-sno-aws
hco-e2e-upgrade-prev-operator-sdk-aws lane succeeded. /override ci/prow/hco-e2e-upgrade-prev-operator-sdk-azure hco-e2e-consecutive-operator-sdk-upgrades-aws lane succeeded. /override ci/prow/hco-e2e-consecutive-operator-sdk-upgrades-azure hco-e2e-operator-sdk-azure lane succeeded. /override ci/prow/hco-e2e-operator-sdk-aws hco-e2e-operator-sdk-azure lane succeeded. /override ci/prow/hco-e2e-operator-sdk-gcp
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-consecutive-operator-sdk-upgrades-azure, ci/prow/hco-e2e-operator-sdk-aws, ci/prow/hco-e2e-operator-sdk-gcp, ci/prow/hco-e2e-upgrade-prev-operator-sdk-azure
hco-e2e-kv-smoke-gcp lane succeeded. /override ci/prow/hco-e2e-kv-smoke-azure
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-kv-smoke-azure
/retest
@akrejcir - I'm not sure I understand the logic here.
If I understand correctly, this PR reuses the old FG in order to clean up tekton resources, so the user has to actively set it to false. I'm not sure I understand why this is needed. Are we talking about the upgrade use case, where the FG was set to true in the previous version? If yes, please add this to the PR description. Anyway, as the code is written now, there is no way not to set the FG: if you remove it, it will be reset to false, because that is the default value.
/hold
It's a pointer so it should be possible to not set it, right?
This is intended as an optional way to clean up old Tekton tasks which may remain on the cluster but are deprecated and not required anymore.
This change will make the behavior after an update less surprising to the user. In the old version, the feature gate behaved as:
- `true` - SSP creates tekton resources.
- `false` - SSP removes created tekton resources.

In the new version with this change:
- `true` - SSP does not create tekton resources, only adds a deprecated annotation.
- `false` - SSP removes tekton resources that were created by the previous version.
Without this PR, the user would have to manually remove old tekton resources or uninstall SSP.
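To make the contrast concrete, a rough sketch of the new semantics described above; the function names here are hypothetical stand-ins for SSP's reconcile steps, not SSP's actual code.

```go
package main

import "fmt"

// Hypothetical stand-ins for SSP's reconcile steps.
func annotateTektonResourcesAsDeprecated() {
	fmt.Println("annotate existing tekton resources as deprecated")
}

func removeTektonResources() {
	fmt.Println("remove tekton resources created by the previous version")
}

// reconcileTektonGate models the new behavior: true no longer creates
// anything, while false (or an unset gate, which propagation maps to false)
// removes the leftovers from the previous version.
func reconcileTektonGate(deployTektonTaskResources bool) {
	if deployTektonTaskResources {
		annotateTektonResourcesAsDeprecated()
		return
	}
	removeTektonResources()
}

func main() {
	reconcileTektonGate(true)
	reconcileTektonGate(false)
}
```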
The feature gate in SSP is not a pointer, which is ok. Not setting it is the same as setting it to `false`.
> It's a pointer so it should be possible to not set it, right?
> This is intended as an optional way to clean up old Tekton tasks which may remain on the cluster but are deprecated and not required anymore.
But we have CRD defaults there, setting this FG to be false by default. It will never be empty.
Also, to me it seems like an abuse of the API, and I really don't like it.
See lines
- https://github.com/kubevirt/hyperconverged-cluster-operator/blob/8144382ca03156cadc4024c9ffe767503daaa08f/api/v1beta1/hyperconverged_types.go#L69
- https://github.com/kubevirt/hyperconverged-cluster-operator/blob/8144382ca03156cadc4024c9ffe767503daaa08f/api/v1beta1/hyperconverged_types.go#L424-L425
- https://github.com/kubevirt/hyperconverged-cluster-operator/blob/8144382ca03156cadc4024c9ffe767503daaa08f/api/v1beta1/hyperconverged_types.go#L832
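For context, a declaration along these lines is what the links above point at. The field name and kubebuilder marker below are an approximation, not a verbatim copy of hyperconverged_types.go.

```go
package v1beta1

// HyperConvergedFeatureGates approximates the linked type: the gate is a
// *bool, but the kubebuilder default means the API server fills in false
// whenever the field is omitted, so in practice it is never empty.
type HyperConvergedFeatureGates struct {
	// deployTektonTaskResources instructs SSP to deploy (or, with this PR,
	// clean up) the tekton resources it used to manage.
	// +optional
	// +kubebuilder:default=false
	DeployTektonTaskResources *bool `json:"deployTektonTaskResources,omitempty"`
}
```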
> But we have CRD defaults there, setting this FG to be false by default. It will never be empty.
It is ok if the FG is never empty. `false` is the same as `nil` for this FG.
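A tiny sketch of the equivalence being claimed, assuming the same mapping as in the propagation sketch above: an omitted gate and an explicit false end up identical on the SSP side.

```go
package main

import "fmt"

func main() {
	var unset *bool        // gate omitted in the HyperConverged CR
	explicitFalse := false // gate explicitly set to false

	// Same mapping assumed in the propagation sketch above.
	toSSP := func(fg *bool) bool { return fg != nil && *fg }

	fmt.Println(toSSP(unset), toSSP(&explicitFalse)) // false false - indistinguishable for SSP
}
```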
> Also, to me it seems like an abuse of the API, and I really don't like it.
Why do you think it is an API abuse? In my opinion it is close to the previous functionality. The only difference is that the `true` value does not do anything.
Do you maybe have a different idea how to do this? Or, if you think it is a bad design, we can decide not to implement it.
New changes are detected. LGTM label has been removed.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign nunnatsa for approval. For more information, see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing `/approve` in a comment.
Approvers can cancel approval by writing `/approve cancel` in a comment.
Quality Gate passed
Issues
0 New issues
0 Accepted issues
Measures
0 Security Hotspots
No data about Coverage
0.7% Duplication on New Code
/retest
/unhold