When using an ALB Ingress in front of a ClusterIP gateway service, the controller doesn't pick the right Load Balancer Address

Open aamattos opened this issue 1 year ago • 3 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Kong version ($ kong version)

Kong 3.7

Current Behavior

When using an AWS ALB Ingress in front of a ClusterIP gateway service, the controller doesn't pick the right load balancer address. Instead of using the DNS name of the ALB created by the Ingress, it picks the internal IP of the ClusterIP service. As a workaround I had to set (in a second step) the PUBLISH_STATUS_ADDRESS environment variable to the ALB endpoint.
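For reference, a minimal sketch of that workaround, assuming the kong/kong Helm chart, where keys under ingressController.env are uppercased and prefixed with CONTROLLER_ before being passed to the controller container; the ALB DNS name below is a placeholder:

# values.yaml (sketch of the workaround; chart key names assumed)
ingressController:
  env:
    # rendered as CONTROLLER_PUBLISH_STATUS_ADDRESS, i.e. the --publish-status-address setting
    publish_status_address: "<alb-dns-name>.elb.amazonaws.com"

With this set, KIC reports the ALB hostname on the Ingresses it manages instead of the service address.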

Expected Behavior

When using an Ingress in front of the gateway service (proxy.ingress.enabled = true), the controller should look at the Ingress's load balancer instead of the service's load balancer.

Steps To Reproduce

1. Enable the proxy ingress and set the proxy service type to ClusterIP.
2. Add the proper ingress annotations to create an ALB.
3. Wait for Kong to reconcile the Ingresses with ingress class kong (or whatever ingress class you defined).
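A hedged values sketch of such a setup, using key names from the kong/kong Helm chart and illustrative AWS Load Balancer Controller annotations (hostname and annotation values are placeholders):

# values.yaml (sketch)
proxy:
  type: ClusterIP
  ingress:
    enabled: true
    ingressClassName: alb              # reconciled by the AWS Load Balancer Controller
    hostname: kong.example.internal    # placeholder
    annotations:
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip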

Anything else?

No response

aamattos avatar Oct 29 '24 15:10 aamattos

Can you dump the full status of your load balancer service with kubectl get service <lb-service> -n <service-namespace>? Kong Ingress Controller (KIC) uses the first load balancer address of the publish service (status.loadBalancer.ingress[0]) as the address of the Ingresses when the service has more than one. KIC cannot detect which address actually carries traffic when several are attached to your LB service, so it can only use the first one unless an address is specified explicitly.
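For reference, the full status can be dumped with -o yaml, or the exact field KIC reads can be queried directly:

kubectl get service <lb-service> -n <service-namespace> -o yaml
kubectl get service <lb-service> -n <service-namespace> -o jsonpath='{.status.loadBalancer.ingress}'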

randmonkey avatar Nov 07 '24 06:11 randmonkey

This is the status of my Service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: kong-alb-internal
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gateway
    app.kubernetes.io/version: "3.6"
    argocd.argoproj.io/instance: kong-prd
    enable-metrics: "true"
    helm.sh/chart: gateway-2.41.1
  name: kong-alb-internal-gateway-proxy
  namespace: kong
  resourceVersion: "3281033579"
  uid: 651b052f-028e-4862-8944-c964f16ad318
spec:
  clusterIP: 172.20.27.136
  clusterIPs:
  - 172.20.27.136
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: kong-proxy
    port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong-alb-internal
    app.kubernetes.io/name: gateway
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

The problem is that KIC is picking the service's private IP (172.20.27.136) instead of using the address of the ALB Ingress that points to the ClusterIP service.
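The address KIC publishes can also be seen on the Ingress resources it manages, for example:

# the ADDRESS column shows what KIC wrote into each Ingress status
kubectl get ingress -A
# or the raw field for a single Ingress
kubectl get ingress <ingress-name> -n <namespace> -o jsonpath='{.status.loadBalancer.ingress}'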

aamattos avatar Nov 13 '24 10:11 aamattos

This looks like a problem with your Kubernetes provider, because it did not update the status.loadBalancer of your service. KIC can only read the load balancer address from the status.loadBalancer field.
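For comparison, a hypothetical status of a Service of type LoadBalancer whose cloud controller has filled in the field KIC reads (the hostname is a placeholder):

status:
  loadBalancer:
    ingress:
    - hostname: <nlb-dns-name>.elb.amazonaws.com

Because the proxy Service here is of type ClusterIP, nothing ever populates this field, so the explicit publish status address (the workaround from the issue description) is what tells KIC to report the ALB DNS name instead.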

randmonkey avatar Jul 08 '25 07:07 randmonkey