Increase Gateway CRD Infrastructure Annotation Limit
What would you like to be added:
In the experimental channel of the Gateway CRD, the maximum number of infrastructure annotations (maxProperties in the schema) is set to 8:
https://github.com/kubernetes-sigs/gateway-api/blob/main/config/crd/experimental/gateway.networking.k8s.io_gateways.yaml#L181
This enhancement requests that the limit be increased to accommodate common cloud use cases; a higher limit (~20 properties) would be ideal.
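For context, the constraint in question looks roughly like this (paraphrased from the linked CRD, with unrelated fields and surrounding nesting elided):

```yaml
# Paraphrased excerpt of the experimental Gateway CRD schema
# (gateway.networking.k8s.io_gateways.yaml); unrelated fields elided.
infrastructure:
  properties:
    annotations:
      type: object
      maxProperties: 8        # the limit this issue asks to raise (to ~20)
      additionalProperties:
        type: string
        maxLength: 255
```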
Why this is needed: Downstream resources (cloud load balancers, for example) often require many annotations to be configured appropriately. In an AWS environment, for instance, the responsibility for creating the underlying Network Load Balancer (NLB) or Application Load Balancer (ALB) that fulfills the Gateway object falls to the AWS Load Balancer Controller. This controller uses annotations to configure load balancer properties such as health checks and security group associations.
Here are some examples of the configurations we typically see in our clusters; a sketch of how each set might attach to a Gateway or Service follows each list below.
Application (L7) Load Balancer Annotations:
alb.ingress.kubernetes.io/actions.myservice-80: '{"forwardConfig":{"targetGroups":[{"serviceName":"service","servicePort":8080,"weight":100}]},"type":"forward"}'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/certificate-arn: ${cert_arn}
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
alb.ingress.kubernetes.io/healthy-threshold-count: "2"
alb.ingress.kubernetes.io/ip-address-type: ipv4
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/load-balancer-name: my-loadbalancer-name
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/security-groups: sg-0123456789abcde01
alb.ingress.kubernetes.io/success-codes: "200"
alb.ingress.kubernetes.io/tags: env=dev,type=alb,app=my-app
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
kubernetes.io/ingress.class: alb
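Attached to a Gateway, these keys would live under spec.infrastructure.annotations, where the current cap is exceeded almost immediately. A minimal sketch, assuming an ALB-backed GatewayClass named alb (hypothetical) and the experimental channel where infrastructure is available:

```yaml
# Illustrative Gateway: nine ALB annotations already exceed maxProperties: 8.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-alb-gateway            # hypothetical name
spec:
  gatewayClassName: alb           # assumes an ALB-backed GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
  infrastructure:
    annotations:
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/ip-address-type: ipv4
      alb.ingress.kubernetes.io/healthcheck-path: /
      alb.ingress.kubernetes.io/healthcheck-port: traffic-port
      alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
      alb.ingress.kubernetes.io/healthy-threshold-count: "2"
      alb.ingress.kubernetes.io/success-codes: "200"
      alb.ingress.kubernetes.io/tags: env=dev,type=alb,app=my-app
```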
Network (L4) Load Balancer Annotations:
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "false"
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: env=dev,type=alb,app=my-app
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "false"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
service.beta.kubernetes.io/aws-load-balancer-name: my-nlb
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-scheme: internal
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: type=cpu
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/load-balancer-source-ranges: 172.0.0.0/8, 10.0.0.0/10
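Today these keys live on a Service of type LoadBalancer; under Gateway API the equivalent home would be the same spec.infrastructure.annotations map. For context, a minimal Service sketch using a typical subset (names and values illustrative):

```yaml
# Illustrative Service carrying a typical subset of the NLB annotations above;
# moving even these nine keys onto Gateway infrastructure would exceed the cap of 8.
apiVersion: v1
kind: Service
metadata:
  name: my-nlb-service            # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
    service.beta.kubernetes.io/aws-load-balancer-name: my-nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # hypothetical selector
  ports:
  - port: 443
    targetPort: 8443
```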
AWS Load Balancer Controller annotation references:
- https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/ingress/annotations/
- https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/service/annotations/
I don't mind this in general, but a lot of those example annotations ought to be replaced by first-class API fields that already exist.
Appreciate the feedback.
Typically, we follow the cloud provider's guidelines and documentation, which are often written against a Kubernetes version older than the current release, so those API fields may not yet be available.
In general, I agree that reducing the number of annotations and replacing them with API fields is good practice; however, I still think we're going to need more than 8.
Would you mind providing an example of which annotation(s) are covered by Service or Ingress classes?
Thank you!
- backend-protocol: service.ports.appProtocol
- certificate-arn: gateway.spec.tls
- actions: HTTPRoute
- ip-address-type: gateway.spec.addresses
- listen-ports: gateway.spec.listeners
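A rough sketch of how some of those mappings look as first-class fields (class, names, and values are illustrative; provider-specific certificate references such as ACM ARNs vary by implementation):

```yaml
# Illustrative Gateway expressing listen-ports, certificates, and addresses
# as first-class fields; redirect/forward "actions" would move to HTTPRoute.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway            # hypothetical name
spec:
  gatewayClassName: example-class  # hypothetical class
  addresses:                       # address/ip-address-type annotations
  - type: IPAddress
    value: 10.0.0.10               # illustrative address
  listeners:                       # listen-ports annotation
  - name: http
    protocol: HTTP
    port: 80
  - name: https
    protocol: HTTPS
    port: 443
    tls:                           # certificate-arn annotation
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: example-cert         # hypothetical Secret
```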
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.