
TLS configuration should not have to be placed in the Gateway CR

fanux opened this issue 2 years ago • 6 comments

In multi-tenant scenarios, different users have different domain names, and each domain name needs its own certificate. If every tenant updates the same Gateway CR, they will inevitably interfere with one another, and with a thousand domain names the Gateway would need a thousand listeners. A more reasonable approach would be to configure TLS in the HTTPRoute, or to manage certificate configuration through a separate CRD.

Currently:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    # hostname: "*.example.com"
  - name: https
    port: 443
    protocol: HTTPS
    # hostname: "*.example.com"
    tls: 
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: example-com
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
      sectionName: https
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /

A better way:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    # hostname: "*.example.com"
  - name: https
    port: 443
    protocol: HTTPS
    # hostname: "*.example.com"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
      sectionName: https
  hostnames:
    - "www.example.com"
  tls:
    mode: Terminate
    certificateRefs:
    - kind: Secret
      name: example-com
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /

Or, with a separate CRD to manage certificate configuration:

apiVersion: gateway.networking.k8s.io/v1
kind: TLS
metadata:
  name: backend
spec:
  httproute: backend
  tls:
    mode: Terminate
    certificateRefs:
    - kind: Secret
      name: example-com

fanux · Dec 11 '23 04:12

Or is there currently a good way to solve this problem? In our scenario there are tens of thousands of separate tenants, each of which may have its own domain names and certificates to configure. The Gateway is created centrally by cluster management, so it is impossible for tenants to modify the listeners, yet each tenant needs to configure its own domain certificate in its own namespace.

fanux · Dec 11 '23 04:12
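One partial mitigation available today: keep the listeners in the shared Gateway, but point each listener's certificateRefs at a Secret in the tenant's own namespace, authorized by a ReferenceGrant. A minimal sketch, assuming the shared Gateway lives in an infra namespace and the tenant owns tenant-a (all names illustrative); note the cluster admin still has to add a listener per tenant, so this only addresses certificate ownership, not listener scale:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-to-tenant-cert
  namespace: tenant-a            # tenant namespace that owns the cert Secret
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: Gateway
    namespace: infra             # namespace of the shared, admin-managed Gateway
  to:
  - group: ""
    kind: Secret
    name: tenant-a-tls           # restricts the grant to this one Secret

Setting to.name keeps the grant narrow, so the tenant does not implicitly expose every Secret in the namespace to the Gateway.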

PTAL: https://gateway-api.sigs.k8s.io/api-types/backendtlspolicy

Xunzhuo · Dec 12 '23 07:12
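For reference, BackendTLSPolicy configures TLS from the gateway to the backend, not listener termination, so it covers a different hop than the one asked about here. A minimal sketch following the v1alpha3 shape in the linked docs (the exact fields vary by Gateway API version, and the names are illustrative):

apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: backend-tls
spec:
  targetRefs:                    # the backend Service this policy applies to
  - group: ""
    kind: Service
    name: backend
  validation:
    hostname: www.example.com    # name to verify on the backend's certificate
    caCertificateRefs:
    - group: ""
      kind: ConfigMap
      name: backend-ca           # CA bundle used to validate the backend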

This may be related to #749.

yinxulai · Jan 31 '24 03:01

We specifically designed Gateways to limit the number of Listeners to encourage people to have smaller numbers of Listeners in each Gateway, and we decided not to have TLS config in HTTPRoutes because it's rightly a property of the Listener.

For the case where a cluster has thousands of tenants with different domain names, I'd recommend treating the Gateway as part of each tenant's deployment and having an individual one per tenant, or sharding the tenants across Gateways.

I suspect that the problem here lies in the cost of handing out an IP (or other load-balancer-related resource) to each Gateway. #1713 is intended to add a standard way to have a single Gateway that holds the IP address and other config, and then merge other Gateways into it. That would make a scenario like the one you describe much easier.

youngnick · Feb 29 '24 04:02
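A minimal sketch of the per-tenant pattern described above, assuming the eg class from earlier and illustrative tenant names; each tenant owns a small Gateway with its own listener and certificate, and the #1713 merging work would let such Gateways share a single address:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tenant-a
  namespace: tenant-a            # the tenant owns and edits this Gateway
spec:
  gatewayClassName: eg           # shared class, per-tenant instance
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    hostname: "www.tenant-a.example"
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: tenant-a-tls       # certificate Secret in the tenant's namespace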

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · May 29 '24 05:05

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Jun 28 '24 05:06

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Jul 28 '24 05:07

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this (the triage robot's /close not-planned comment above):

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Jul 28 '24 05:07

> We specifically designed Gateways to limit the number of Listeners to encourage people to have smaller numbers of Listeners in each Gateway, and we decided not to have TLS config in HTTPRoutes because it's rightly a property of the Listener.

This seems like an unfortunate choice. We have thousands of Ingress resources, each of which may have its own hostnames and TLS configuration, and because of this choice they can't be ported directly to HTTPRoutes. We generally provide only a few ingress controllers/gateways; creating thousands of Gateways is unsuitable, since they're an infrastructure detail with management and monetary costs. With Ingress, it was easy for an app developer to add an Ingress object expressing their need for a cert, a hostname, and paths mapped to their Service. If we adopted this, we'd have app developers trying to extend the Gateway's listeners all the time.

A lot of the Gateway API examples rely heavily on folks using company-wide wildcard certificates.

dghubble · Oct 25 '24 21:10
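For contrast, a minimal sketch of the Ingress pattern described above, which an app developer could own end to end (the class name and hostnames are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
spec:
  ingressClassName: nginx        # illustrative; any installed controller class
  tls:
  - hosts:
    - www.example.com
    secretName: example-com      # certificate Secret lives next to the app
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 3000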

Started a discussion at https://github.com/kubernetes-sigs/gateway-api/discussions/3418

dghubble · Oct 28 '24 16:10