aws-load-balancer-controller
listen-ports annotation is mandatory for IngressGroup to work
Hello!
Describe the bug
When creating 2 Ingresses inside an IngressGroup: the first one has all the Exclusive and some of the Merge annotations to create the ALB and the explicit IngressGroup. The second Ingress contains only the group.name and group.order annotations:
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/group.name: "{{ .Release.Namespace }}-external-ingress-group"
alb.ingress.kubernetes.io/group.order: '2'
The load balancer controller creates neither the TargetGroup nor the rule in the ALB. Nothing is shown in the logs; it just outputs the model as an info log.
If you add the listen-ports annotation (already configured in the first Ingress):
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
the controller creates the TargetGroup and the rule.
Steps to reproduce
Expected outcome
listen-ports should not be mandatory, or the controller should log an error saying that no listen-ports is specified for the Ingress rules.
Environment
- AWS Load Balancer controller version: 2.6.1
- Kubernetes version: 1.27
- Using EKS (yes/no), if so version? no
Additional Context:
@AzaxSyndrom, hi, alb.ingress.kubernetes.io/listen-ports is merged across the ingress group, and we allow users to define different ports per Ingress. Most annotations will only apply to the paths in that single Ingress. Would you be able to share the spec of your second Ingress?
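(For illustration, a hedged sketch of what this merge behavior looks like, using hypothetical names not taken from this issue: the group's ALB ends up with the union of the listen-ports declared by its member Ingresses, while each Ingress's rules attach only to the ports it declares itself.)
# Hypothetical example (not from this issue): two Ingresses in the same IngressGroup.
# ingress-a declares HTTP:80 and ingress-b declares HTTP:8080, so the shared ALB
# ends up with listeners on both ports, but ingress-b's rules only attach to 8080.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-a
  annotations:
    alb.ingress.kubernetes.io/group.name: demo-group
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /a
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-b
  annotations:
    alb.ingress.kubernetes.io/group.name: demo-group
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":8080}]'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /b
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80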
Hi,
Main Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "{{ .Release.Namespace }}-{{ .Values.env }}-external"
  namespace: default
  labels:
    app: "{{ .Release.Namespace }}-{{ .Values.env }}-external"
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: "{{ .Values.alb.acmArn }}"
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/load-balancer-name: "alb-{{ .Release.Namespace }}-{{ .Values.env }}-external"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01
    alb.ingress.kubernetes.io/tags: >-
      ingress.k8s.aws/resource=LoadBalancer,
      elbv2.k8s.aws/cluster="{{ .Values.clusterName }}"
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/group.name: "{{ .Release.Namespace }}-{{ .Values.env }}-external-ingress-group"
spec:
  ingressClassName: alb
Second Ingress for the group:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "{{ .Release.Namespace }}-{{ .Chart.Name }}-{{ .Values.env }}-external"
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/healthcheck-path: /uaa/login
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/group.name: "{{ .Release.Namespace }}-{{ .Values.env }}-external-ingress-group"
    alb.ingress.kubernetes.io/group.order: '20'
spec:
  ingressClassName: alb
Like this it works. But if I remove listen-ports from the second Ingress, nothing is created in the load balancer. Since that annotation is merged, it should use the value from the main Ingress by default, since it is set there. (The failing variant is sketched below.)
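For reference, a minimal sketch of the failing variant (the same second-Ingress manifest as above, with only the listen-ports annotation removed):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "{{ .Release.Namespace }}-{{ .Chart.Name }}-{{ .Values.env }}-external"
  namespace: default
  annotations:
    # listen-ports intentionally omitted: with this variant the controller
    # creates no TargetGroup and no ALB rule, and logs no warning.
    alb.ingress.kubernetes.io/healthcheck-path: /uaa/login
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/group.name: "{{ .Release.Namespace }}-{{ .Values.env }}-external-ingress-group"
    alb.ingress.kubernetes.io/group.order: '20'
spec:
  ingressClassName: alb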
Can confirm the behaviour @AzaxSyndrom is describing. I spent several hours debugging why the annotation doesn't create the right rules in the load balancer.
This does not create rules:
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/group.order: '10'
This does create them as expected:
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/group.order: '10'
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
@AzaxSyndrom I'm so happy I found your comment. Massive thanks!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Faced the same issue today and wasted so much time, because the controller doesn't tell you that it's not actually going to create the listener rule and target group.
I think it's related to having the alb.ingress.kubernetes.io/ssl-redirect: '443' annotation on your primary Ingress and not adding certificate-arn on your second Ingress. In that case the controller assumes the listen-ports to be '[{"HTTP": 80}]' according to the documentation, and tries to add the rule on the HTTP listener, but that's not possible because ssl-redirect is enabled.
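A hedged sketch of a workaround that follows from this theory (the Ingress name and annotation values below are illustrative, not from this issue): either declare the ports explicitly on the second Ingress, or give it a certificate-arn so the documented default becomes '[{"HTTPS": 443}]' instead of '[{"HTTP": 80}]'.
# Illustrative second Ingress for the group; names and values are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress
  annotations:
    alb.ingress.kubernetes.io/group.name: test-alb
    alb.ingress.kubernetes.io/group.order: '10'
    alb.ingress.kubernetes.io/target-type: ip
    # Option 1: declare the ports explicitly (the workaround reported in this thread)
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # Option 2 (per the theory above): set certificate-arn so the default
    # listen-ports becomes '[{"HTTPS": 443}]' rather than '[{"HTTP": 80}]'
    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...:certificate/...
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 8080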
/remove-lifecycle rotten
Found three other closed issues related to the same subject: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2471, https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2799 and https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2932.
I think this warrants at least a warning in the controller logs and/or the event API.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten