aws-load-balancer-controller
ALB provisioned with incorrect redirect rules?
Describe the bug
I am using the load balancer controller to configure an internal ALB for a service, following this documentation to set up automatic HTTP-to-HTTPS redirection on the load balancer listeners: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/tasks/ssl_redirect/
The load balancer is provisioned, but the rules for both the port 80 listener and the port 443 listener are set to a static 404 response.
Steps to reproduce
Create a Service and an Ingress with the following annotations:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: company-service
  namespace: dar
spec:
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  type: LoadBalancer
  selector:
    app: company-service
```
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: company-service
  namespace: dar
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:540369091157:certificate/6a51c08d-<REDACTED>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    external-dns.alpha.kubernetes.io/hostname: company-service.internal.staging.company.com.
    external-dns.alpha.kubernetes.io/ttl: "300"
spec:
  rules:
    - host: "company-service.internal.staging.company.com"
    - http:
        paths:
          - path: /
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
```
Expected outcome
An ALB provisioned with a listener on port 80 that has a rule redirecting to the listener on port 443, and a listener on port 443, carrying the assigned certificate, that forwards traffic to the Kubernetes pods behind it.
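For comparison, the linked ssl_redirect task doc pairs the ssl-redirect action with a second path that forwards to the real service; without that forward path the controller has nothing to route HTTPS traffic to, which may explain the static 404 default actions observed. A minimal sketch of what the `spec.rules` section would look like in that style (the `company-service` backend name and port 80 are taken from the Service above; note also that `host` and `http` belong to the same rule entry here, unlike the manifest above where they are separate list items):
```yaml
spec:
  rules:
    - host: "company-service.internal.staging.company.com"
      http:
        paths:
          # Redirect action first: HTTP traffic matches this path and is
          # redirected to HTTPS via the actions.ssl-redirect annotation.
          - path: /
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          # Forward path second: traffic falls through to the real
          # Service (name/port taken from the Service manifest above).
          - path: /
            backend:
              serviceName: company-service
              servicePort: 80
```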
Environment
- EKS version: 1.21.2
- AWS Load Balancer controller version: amazon/aws-alb-ingress-controller:v2.2.4
- Kubernetes version: 1.21.1
- Using EKS (yes/no), if so version?: yes, version 1.21
Additional context:
Checking the load balancer directly in the AWS console shows the rules that were created (screenshot omitted). Also, when attempting to manually correct the 443 listener rule so that it forwards to the pods behind it, the options are all greyed out and not selectable.
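The same listener rules can also be inspected from the CLI rather than the console; a minimal sketch, assuming configured AWS CLI credentials (the load balancer name is a placeholder):
```bash
# Resolve the ALB ARN (load balancer name is a placeholder).
LB_ARN=$(aws elbv2 describe-load-balancers \
  --names my-alb-name \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# List the listeners and their ports.
aws elbv2 describe-listeners --load-balancer-arn "$LB_ARN" \
  --query 'Listeners[].[ListenerArn,Port]' --output table

# Dump the rules for one listener; this bug shows up as a default
# fixed-response 404 action instead of a redirect/forward action.
aws elbv2 describe-rules --listener-arn <listener-arn>
```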

@m477r1x, a couple of things:
- With the v2.2.4 controller, you can configure SSL redirect via the alb.ingress.kubernetes.io/ssl-redirect annotation; for further details, refer to the live docs: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#ssl-redirect
- I don't see any ingress rule that forwards traffic to your k8s service. Either the manifest you put in the issue is incomplete, or you don't have the rules. Could you ensure the configuration is complete? (See the sketch after this list.)
- Changes you make from the AWS console will be overwritten by the controller during the next reconcile.
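A minimal sketch of that v2.2+ approach, assuming a networking.k8s.io/v1 Ingress; the names and certificate ARN are carried over from the manifest in the issue, and the forward path at the end is the piece the original manifest is missing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: company-service
  namespace: dar
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # With v2.2+, this single annotation handles the HTTP->HTTPS redirect;
    # no actions.ssl-redirect action or extra ssl-redirect path is needed.
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:540369091157:certificate/6a51c08d-<REDACTED>
spec:
  rules:
    - host: "company-service.internal.staging.company.com"
      http:
        paths:
          # Explicit forward rule to the real Service -- the piece missing
          # from the manifest in the issue.
          - path: /
            pathType: Prefix
            backend:
              service:
                name: company-service
                port:
                  number: 80
```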
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Any progress on this? If you look at the Ingress configuration in k8s, you'll see there's an error, despite the fact that the ssl-redirect annotation is applied:
```
Rules:
  Host           Path  Backends
  ----           ----  --------
  mywebsite.com
                 /     ssl-redirect:use-annotation (<error: endpoints "ssl-redirect" not found>)
Annotations:     alb.ingress.kubernetes.io/actions.ssl-redirect:
                   {"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}
```
@crawforde, what is your controller version? Did you apply the alb.ingress.kubernetes.io/ssl-redirect annotation?
I am using 1.19. I didn't use the alb.ingress.kubernetes.io/ssl-redirect annotation because the docs say it makes changes to the other load balancers in the group, and I don't want ripple effects throughout the whole cluster. However, this SSL behavior was also broken on the previous Ingress version.
I have the same annotation configuration as @m477r1x, but the port config looks like this because of the Ingress API changes in 1.19:
```yaml
- path: /
  pathType: Prefix
  backend:
    service:
      name: ssl-redirect
      port:
        name: use-annotation
```
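For completeness, under networking.k8s.io/v1 the legacy actions.ssl-redirect approach still appears to need a second path that forwards to the real Service; a hedged sketch (`my-service` and port 80 are placeholders). The `endpoints "ssl-redirect" not found` message in kubectl describe looks cosmetic, since ssl-redirect is resolved through the annotation rather than an actual Service:
```yaml
spec:
  rules:
    - http:
        paths:
          # Annotation-backed redirect action; "ssl-redirect" is not a real
          # Service, which is why kubectl describe cannot find endpoints.
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          # Forward to the actual Service (placeholder name and port).
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```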
/remove-lifecycle rotten
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
@m477r1x, @crawforde, would you be able to share your complete manifests and the model generated by the controller? You can get the model from the controller logs. Also, if there are any errors, please share them with us.
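A sketch of how to pull those logs, assuming the controller was installed under the default name in kube-system (adjust the namespace and deployment name for your install):
```bash
# Tail the controller logs (deployment name and namespace assume a
# default Helm install of aws-load-balancer-controller).
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=500

# The desired-state model is logged during reconciles; filtering for
# "model" and "error" usually surfaces the relevant lines.
kubectl logs -n kube-system deployment/aws-load-balancer-controller \
  | grep -iE 'model|error'
```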
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.