
Make listener port optional

Open markmc opened this issue 3 years ago • 5 comments

There are multiple cases where it may make sense to not require a gateway owner to specify a listener port:

  1. Where a port number is irrelevant or nonsensical - #1052 is an example AIUI
  2. Where the port is well-known and can be derived from the listener protocol - e.g. where protocol==HTTP default to port 80 if unspecified
  3. Where a port should be auto-allocated for this listener - e.g. similar to auto-allocating an ephemeral port if you call bind() with the port specified as 0
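
Case 3 mirrors standard socket behavior: binding to port 0 asks the kernel to pick an ephemeral port. A minimal sketch of that analogy in Python:

```python
import socket

# Binding with port 0 tells the OS to auto-allocate an ephemeral port,
# analogous to auto-allocating a listener port in case 3 above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]  # the kernel-assigned ephemeral port
sock.close()
print(port)  # a nonzero port chosen by the OS
```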

One proposal is to specify that an unspecified port should be defined to have per-protocol semantics - so e.g. we'd define defaults for HTTP and HTTPS, but TLS, TCP, and UDP would be auto-allocated a random port. Implementation-specific protocols would define their own default behaviors.

Another alternative would be to use port==0 for auto-allocating a random port, and only have defaults for protocols where one makes sense.
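
To make the difference between the two proposals concrete, here is a hypothetical sketch of the resolution logic. The `resolve_port` helper and its return convention are illustrative only, not part of the Gateway API; the protocol names mirror the API's `ProtocolType` values.

```python
# Well-known defaults only exist for protocols where one makes sense
# (proposal 2); under proposal 1, TLS/TCP/UDP would instead be
# auto-allocated when the port is unspecified.
WELL_KNOWN_PORTS = {"HTTP": 80, "HTTPS": 443}

def resolve_port(protocol, port=None):
    """Return (port, auto_allocate) for a listener spec.

    port=None  -> unspecified: use a well-known default if one exists,
                  otherwise auto-allocate.
    port=0     -> explicit request to auto-allocate a random port.
    """
    if port not in (None, 0):
        return port, False  # explicitly specified by the gateway owner
    if protocol in WELL_KNOWN_PORTS and port is None:
        return WELL_KNOWN_PORTS[protocol], False  # per-protocol default
    return None, True  # auto-allocate (e.g. TCP with no port, or port==0)

print(resolve_port("HTTP"))      # (80, False)
print(resolve_port("TCP"))       # (None, True)
print(resolve_port("HTTPS", 0))  # (None, True)
```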

See also #1060, #818, and Gateway API: Multi Port Approach

markmc avatar Mar 22 '22 17:03 markmc

Quoting #818

Although we discussed the possibility of preemptively making port optional prior to beta graduation, that would open a rather complex topic of how an empty/unspecified port should be interpreted. Instead we'll leave this decision for a later release.

As part of the upcoming release, we'll update our implementation and versioning guidelines to emphasize that required fields may become optional in future releases of the API. (This is already stated in https://gateway-api.sigs.k8s.io/concepts/versioning/#api-authors, but may not be highlighted clearly enough).

markmc avatar Mar 22 '22 17:03 markmc

#1066 adds docs for implementors about the possibility of required fields becoming optional in the future:

Handle fields that have transitioned from required to optional without crashing

markmc avatar Mar 24 '22 13:03 markmc

Gateway resource consists of two parts:

  1. Network configuration that is closely tied to the underlying infrastructure. Ports, IPs, and hostnames are good examples of this category.
  2. Routing configuration such as TLS, route selectors, etc.

Of these, most or all use-cases consider (2) to be the source of truth: the Gateway should lead, not follow.

(1) is more complicated. In most of our designs up to this point, we have focused on the Gateway resource "representing" a network endpoint. We have assumed that such a network endpoint exists or is managed out of band; the Gateway resource reflects the state of that endpoint, and the API has no semantics around managing it.

There are two large classes of use-cases:

  • Treat networking configuration such as ports in the Gateway as the source of truth, and reconcile the rest of the infrastructure to it. This is similar to how the Service resource works, and has been the focus so far.
  • The Gateway represents existing state, and is updated based on what exists in the underlying infrastructure. We have not focused much on this class of use-cases, or have waved it off with "another controller could be creating the Gateway resource". I've seen some use-cases from proxy vendors, mesh vendors, and L4-level use-cases that run into this problem.

I don't propose a solution here. I want to express my understanding and validate it with how others are thinking about it. cc @shaneutt who has done some thinking around these parts.

cc @robscott @youngnick

hbagdi avatar Apr 01 '22 23:04 hbagdi

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 01 '22 00:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 31 '22 00:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Aug 30 '22 01:08 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Aug 30 '22 01:08 k8s-ci-robot