gateway-api
Provide Clarification for Collapsing Compatible Listeners Across Gateways
What would you like to be added: Either update the current spec for collapsing compatible Listeners across Gateways or update conformance tests to support existing guidance for collapsing compatible Listeners across Gateways.
Why this is needed: To aid implementors in implementing the API.
Background: Currently, the v1beta1 Gateway spec states:
An implementation MAY also group together and collapse compatible Listeners belonging to different Gateways.
The spec provides an example of how an implementation might consider Listeners to be compatible. In this example, Protocol must be the same and Hostname must be unique for each Listener. However, the conformance tests are not consistent with this example: they create 3 Gateways whose Listeners all share the same Protocol and Hostname.
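For concreteness, here is a minimal sketch, with hypothetical names, class, and hostname (not copied from the actual conformance manifests), of the shape being described: multiple Gateways whose Listeners share the same Protocol, Port, and Hostname, and are therefore incompatible for collapsing under the spec's example.

```yaml
# Hypothetical illustration only: two Gateways whose listeners use the same
# protocol, port, and hostname. Under the spec's compatibility example, these
# listeners conflict and cannot be collapsed onto shared infrastructure.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway-a                    # hypothetical name
spec:
  gatewayClassName: example-class    # hypothetical class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "example.com"
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway-b                    # hypothetical name
spec:
  gatewayClassName: example-class    # hypothetical class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "example.com"          # same protocol/port/hostname as gateway-a
```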
IMO this is an issue with the implementation/spec, not the tests. The test has defined a perfectly valid set of listeners which cannot be collapsed. There are two correct ways to handle this as an implementation:
- (1) Don't collapse in this case
- (2) Collapse at first, then split once a conflict is detected

(2) seems like a bad idea and adds complexity.
For (1), this doesn't necessarily mean never collapsing; rather, the user explicitly opts into collapsing for that gateway.
We do this in Istio via manual gateway deployment: https://istio.io/latest/docs/tasks/traffic-management/ingress/gateway-api/#manual-deployment.
I'd certainly be happy if there were a more standardized way to represent that.
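Purely as a sketch of what a standardized opt-in could look like: the annotation key below is hypothetical and does not exist in the Gateway API spec or in any implementation; it only illustrates the idea that collapsing happens only for Gateways that explicitly request it.

```yaml
# Hypothetical opt-in marker: this annotation key is invented for illustration
# and is not part of the Gateway API spec or any implementation's API.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
  annotations:
    example.dev/collapse-into: shared-proxy   # hypothetical opt-in key and value
spec:
  gatewayClassName: example-class             # hypothetical class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "a.example.com"                 # unique hostname keeps the listener compatible
```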
The test has defined a perfectly valid set of listeners which cannot be collapsed.
I agree that the tests provide a perfectly valid set of listeners if multi-gateway listener collapsing is not supported by the spec. Otherwise, an implementation that follows the spec will mark each of the conformance test Gateways as Ready=False due to conflicted listeners. This is why I state:
... or update conformance tests to support existing guidance for collapsing compatible Listeners across Gateways.
Without updating the conformance tests, implementations that follow the multi-gateway listener collapsing spec cannot use the provided base manifests because the 3 Gateways have incompatible listeners. I suggest providing separate manifests that adhere to the multi-gateway listener collapsing guidance, e.g. using a distinct listener.hostname for each Gateway.
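As a rough sketch of that suggestion (names and hostnames are made up, not actual conformance manifests), a collapsing-specific set of base manifests could give each Gateway's listener a distinct hostname so the listeners remain compatible:

```yaml
# Hypothetical variant of the base manifests: each Gateway's listener gets a
# distinct hostname, keeping the listeners compatible for cross-Gateway collapsing.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway-a
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "a.example.com"
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway-b
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "b.example.com"        # unique hostname per Gateway
```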
I don't think it's following the spec:
An implementation MAY also group together and collapse compatible Listeners belonging to different Gateways.
It's a bit ambiguous, but to me this means that if they are not compatible you cannot collapse them, not that they should just fail.
It's a bit ambiguous, but to me this means that if they are not compatible you cannot collapse them, not that they should just fail.
Agreed. So an implementation doesn't collapse the listeners since they're not compatible for collapsing, leaving 1 Gateway in a "Ready=True" state and 2 Gateways in a "Ready=False" state. @howardjohn as you mention in https://github.com/kubernetes-sigs/gateway-api/issues/1385#issuecomment-1244718632, I think it's a bad idea for an implementation to "Collapse at first, then split once a conflict is detected".
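For reference, this is roughly what that outcome could look like in the status of one of the two losing Gateways; the condition types and reasons here are illustrative and will vary by implementation and API version.

```yaml
# Illustrative status only: a Gateway whose listener conflicts with a listener
# on another Gateway. Exact condition types and reasons vary by implementation.
status:
  conditions:
  - type: Ready
    status: "False"
    reason: ListenersNotReady          # illustrative reason
    message: "Listener conflicts with a listener on another Gateway"
  listeners:
  - name: http
    conditions:
    - type: Conflicted
      status: "True"
      reason: HostnameConflict         # illustrative reason
    - type: Ready
      status: "False"
      reason: Invalid                  # illustrative reason
```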
From my perspective, the concept of collapsing listeners is present to allow implementations to provision infrastructure in more efficient ways when it's possible. It was not intended to say that all listeners in a cluster should be collapsible to a single underlying Gateway, or that all listeners defined by conformance tests would be collapsible to a single underlying Gateway. Sorry for any confusion here; we should clarify the relevant docs since they are ambiguous on this point.
So an implementation doesn't collapse the listeners since they're not compatible for collapsing, leaving 1 Gateway in a "Ready=True" state and 2 Gateways in a "Ready=False" state.
I believe most implementations are provisioning new infrastructure for each Gateway; that's what I'd recommend here. For in-cluster implementations, that's often a new instance of a controller responsible for implementing that specific Gateway, often paired with a new Service of type LoadBalancer.
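For context, a minimal sketch (all names and labels are hypothetical) of the per-Gateway infrastructure described above: a dedicated Service of type LoadBalancer that fronts the proxy/controller instance provisioned for one specific Gateway.

```yaml
# Hypothetical per-Gateway infrastructure: a dedicated LoadBalancer Service
# selecting the proxy pods that the implementation provisioned for one Gateway.
apiVersion: v1
kind: Service
metadata:
  name: gateway-a-lb                          # hypothetical, one per Gateway
spec:
  type: LoadBalancer
  selector:
    example.dev/owning-gateway: gateway-a     # hypothetical label set by the implementation
  ports:
  - name: http
    port: 80
    targetPort: 8080
```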
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.