add requirement for hostname change
Keep track of which certificates support the SNO hostname change tool, and prevent new certificates that have not confirmed this capability from being added, to avoid accidental regressions.
Once testing for a particular TLS artifact is complete (the requirements are noted in the markdown), adding the annotation to the corresponding secret or configmap indicates that the certificate has been verified. Keep in mind that:
- existing e2e testing does not necessarily cover every certificate and CA bundle
- this mechanism adds an e2e test that ensures new certificates work with a hostname change before they enter the payload.
- acknowledgement is lightweight; see https://github.com/openshift/cluster-etcd-operator/pull/1159/files for a similar example (a rough sketch of the check follows this list).
- this has the benefit of ensuring that teams are aware of exactly how hostname changes impact their certificates.
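For illustration, here is a minimal sketch of what the acknowledgement check could look like. The annotation key, function name, and namespace handling below are assumptions made up for this sketch; the actual key, namespaces, and violation/requirements list are defined in the markdown and the linked PR.

```go
// Hypothetical sketch only: the annotation key below is an assumption for
// illustration, not the key introduced by this PR.
package e2e

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostnameChangeAck is an assumed annotation key used only in this sketch.
const hostnameChangeAck = "certificates.openshift.io/hostname-change-ack"

// findUnacknowledgedTLSSecrets returns the names of TLS secrets in the given
// namespace that do not carry the acknowledgement annotation. An e2e test
// could fail (or require a violation-list entry) for any name returned here.
func findUnacknowledgedTLSSecrets(ctx context.Context, client kubernetes.Interface, namespace string) ([]string, error) {
	secrets, err := client.CoreV1().Secrets(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, fmt.Errorf("listing secrets in %s: %w", namespace, err)
	}
	var missing []string
	for _, s := range secrets.Items {
		if s.Type != corev1.SecretTypeTLS {
			continue
		}
		if _, ok := s.Annotations[hostnameChangeAck]; !ok {
			missing = append(missing, s.Name)
		}
	}
	return missing, nil
}
```

The idea is the same as the etcd-operator example linked above: acknowledging a certificate is just adding one piece of metadata, and the test only enforces that the metadata (or an explicit violation entry) exists.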
/hold
cc @cuppett @romfreiman
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: deads2k
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
- ~~OWNERS~~ [deads2k]
- ~~tls/violations/OWNERS~~ [deads2k]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@omertuc @mresvanis
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
In a relocation (image based upgrade/install) scenario we're changing four things:
- Hostname
- Node IP address
- Cluster name
- Cluster base domain
Is:

> Hostname

mentioned in the PR's description referring to all four things? Or is there some distinction needed here?
@deads2k: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-aws-ovn-single-node-upgrade | 98680e76344e24853d1e8913633eac42ba7ca256 | link | false | /test e2e-aws-ovn-single-node-upgrade |
| ci/prow/e2e-aws-ovn-single-node-serial | 98680e76344e24853d1e8913633eac42ba7ca256 | link | false | /test e2e-aws-ovn-single-node-serial |
| ci/prow/verify-deps | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test verify-deps |
| ci/prow/e2e-metal-ipi-ovn-ipv6 | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test e2e-metal-ipi-ovn-ipv6 |
| ci/prow/lint | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test lint |
| ci/prow/verify | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test verify |
| ci/prow/e2e-gcp-ovn-upgrade | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test e2e-gcp-ovn-upgrade |
| ci/prow/unit | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test unit |
| ci/prow/e2e-aws-ovn-fips | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test e2e-aws-ovn-fips |
| ci/prow/e2e-gcp-ovn | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test e2e-gcp-ovn |
| ci/prow/e2e-aws-ovn-serial | 98680e76344e24853d1e8913633eac42ba7ca256 | link | true | /test e2e-aws-ovn-serial |
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.