Use unique Listener hostnames in modify listeners tests
What type of PR is this?
/kind cleanup
/area conformance
What this PR does / why we need it:
Not all implementations can support multiple Gateways with identical Listeners. This change ensures that implementations that do not create separate control/data planes per Gateway, and that do not merge Gateways, can run this test.
Which issue(s) this PR fixes:
N/A
Does this PR introduce a user-facing change?:
NONE
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: sunjayBhatia
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~conformance/OWNERS~~ [sunjayBhatia]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
e.g. for https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes, this test is a bit problematic without this type of change
SGTM, would like one of the other conformance reviewers to check though.
This change ensures that implementations that do not create separate control/data planes per Gateway, and that do not merge Gateways, can run this test
As a bit of clarification on my terminology in the rest of this comment, I consider anything where Gateways are sharing the same underlying dataplane to be a form of Gateway merging. I think you're referring to the ability to merge overlapping Listeners together as merging, which is a smaller subset than what I'm referring to when I say "Gateway merging".
I'm not really sure about this one, we should likely discuss it at the next community meeting. In general, I think Gateways are intended to be isolated by default, and should only be merged if some kind of opt-in occurs.
This feels like it's really pushing back against one of the fundamental ideas/promises of Gateway API - that by default a Gateway should represent a separate data plane. Of course there are some use cases where users may want to merge Gateways together (https://github.com/kubernetes-sigs/gateway-api/pull/3213), and the API leaves some room for that. At the same time I'm not sure we want that optional feature to lead to weaker guarantees from the API about what it means to be a conformant implementation.
This one seems like it could use more discussion, adding to agenda for next week's community meeting.
/cc
Following up on today's meeting; The current spec explicitly mentions that in order to merge Gateways, the Listeners have to be distinct - https://github.com/kubernetes-sigs/gateway-api/blob/45ab52e94fc5aa981ed96d79d6640446f2c8ffe2/apis/v1/gateway_types.go#L72-L81
I understand this blocks some implementations from being conformant; however, with the current spec in place I actually feel like we need a dedicated test case to check that non-distinct Listeners are not merged.
This does not mean we cannot open a discussion on whether this spec needs a change, but I think it's a separate discussion.
Yeah, I have no issue with there being an explicit test for "non-distinct" Listeners, but I think it should itself be a distinct test from the Gateway modify-listeners test being modified here.
Generally, the original conformance test may have needed a more universal viewpoint on Listener merging (which we all agree is under-specified). The follow-up issue https://github.com/kubernetes-sigs/gateway-api/issues/1842 addressed some concerns, but we still have open concerns in https://github.com/kubernetes-sigs/gateway-api/issues/1713.
However, I notice that the second Gateway has a second Listener:

```yaml
- name: http
  port: 80
  protocol: HTTP
  allowedRoutes:
    namespaces:
      from: All
```

Isn't that enough to make this Gateway distinct, and not an "identical" Gateway?
@sunjayBhatia: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-gateway-api-crds-validation-4 | f2e6defa55a713c5bf2607680ab1fb6917ae2219 | link | true | /test pull-gateway-api-crds-validation-4 |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.