aws-load-balancer-controller
ALB controller doesn't play nice with custom cert-manager issuers?
Describe the bug
I had a custom issuer getting weekly origin certs from Cloudflare. After I switched to this controller to set up ALBs, the ALB controller cannot use the custom-issued certs.
Steps to reproduce
Install the ALB controller on the EKS cluster, then deploy the app with a custom Helm chart.
My ingress.yaml:
```yaml
# Source: palolo-app/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: palolo-app
  labels:
    helm.sh/chart: palolo-app-0.1.0
    app.kubernetes.io/name: palolo-app
    app.kubernetes.io/instance: palolo-app
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    # This should not be required.
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:796723948741:certificate/85944099-22a1-48bd-b343-f8b1a98c54db
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/load-balancer-name: palolo-alpha
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    cert-manager.io/issuer: prod-issuer
    cert-manager.io/issuer-group: cert-manager.k8s.cloudflare.com
    cert-manager.io/issuer-kind: OriginIssuer
  namespace: palolo-alpha
spec:
  ingressClassName: alb
  tls:
    - hosts:
        - "alpha.palolo.com"
      secretName: palolo-com-tls
  rules:
    - host: "alpha.palolo.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: palolo-app
                port:
                  number: 443
```
When I push this with the cert ARN, it works.
When I push it without the cert ARN, the Ingress events show:
```
Events:
  Type     Reason             Age                 From                       Message
  ----     ------             ----                ----                       -------
  Normal   CreateCertificate  48m                 cert-manager-ingress-shim  Successfully created Certificate "palolo-com-tls"
  Warning  FailedBuildModel   37m (x18 over 48m)  ingress                    Failed build model due to ingress: palolo-alpha/palolo-app: no certificate found for host: alpha.palolo.com
```
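That warning comes from the controller's certificate auto-discovery, which looks for a matching certificate in ACM rather than in Kubernetes secrets. A rough check of what discovery could find for the host, assuming boto3 and the us-west-2 region from the annotation above (for brevity it only compares the certificate's primary domain name):

```python
import boto3

# List the issued ACM certificates in the region and report any whose primary
# domain matches the Ingress host (exactly or via a wildcard). A certificate
# that exists only in a Kubernetes secret will never show up here.
HOST = "alpha.palolo.com"
WILDCARD = "*." + HOST.split(".", 1)[1]

acm = boto3.client("acm", region_name="us-west-2")
for page in acm.get_paginator("list_certificates").paginate(
    CertificateStatuses=["ISSUED"]
):
    for summary in page["CertificateSummaryList"]:
        if summary["DomainName"] in (HOST, WILDCARD):
            print("discoverable:", summary["CertificateArn"], summary["DomainName"])
```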
**Expected outcome**
I want the AWS controller to pick up the Cloudflare-issued origin cert stored in the palolo-com-tls Kubernetes secret and use it to serve HTTPS, without requiring an ACM ARN.
I double-checked the secret: the origin cert is in fact in there, and it works in other contexts.
Environment
- AWS Load Balancer controller version - v2.4.1
- cert-manager - v1.8.0
- Kubernetes version - 1.22
- Using EKS (yes/no), if so version? Yes, eks.1
```
NAME                          NAMESPACE     REVISION  UPDATED                               STATUS    CHART                               APP VERSION
aws-load-balancer-controller  kube-system   1         2022-04-22 11:16:53.996107 -0400 EDT  deployed  aws-load-balancer-controller-1.4.1  v2.4.1
cert-manager                  cert-manager  3         2022-04-21 18:28:24.347837 -0400 EDT  deployed  cert-manager-v1.8.0                 v1.8.0
palolo-app                    palolo-alpha  67        2022-04-22 18:35:59.678256 -0400 EDT  deployed  palolo-app-0.1.0                    0.1.0
```
Additional Context: None
The Cloudflare setup follows the instructions from https://github.com/cloudflare/origin-ca-issuer:
```yaml
# Source: palolo-app/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: palolo-sa
  labels:
    helm.sh/chart: palolo-app-0.1.0
    app.kubernetes.io/name: palolo-app
    app.kubernetes.io/instance: palolo-app
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::796723948741:role/palolo-eks-sa
imagePullSecrets:
  - name: docker-secret
---
# Source: palolo-app/templates/certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: palolo-com
  namespace: palolo-alpha
spec:
  # The secret name where cert-manager should store the signed certificate
  secretName: palolo-com-tls
  dnsNames:
    - alpha.palolo.com
    - palolo.com
    - "*.palolo.com"
  # Duration of the certificate
  duration: 168h
  # Renew a day before the certificate expiration
  renewBefore: 24h
  # Reference the Origin CA Issuer you created above, which must be in the same namespace.
  issuerRef:
    group: cert-manager.k8s.cloudflare.com
    kind: OriginIssuer
    name: prod-issuer
---
# Source: palolo-app/templates/originissuer.yaml
apiVersion: cert-manager.k8s.cloudflare.com/v1
kind: OriginIssuer
metadata:
  name: prod-issuer
  namespace: palolo-alpha
spec:
  requestType: OriginECC
  auth:
    serviceKeyRef:
      name: service-key
      key: key
```
@meyerkev, are you looking for the controller to automatically import the certificate into ACM in this case? The ALB listener certificate has to come from ACM.
Yeah, I figured that out.
Right now, as a workaround, I've got a 15-year cert until I can automate that. Is that a requirement of ALBs, or is it a limitation of the controller that I can fix myself? (Eventually, when I'm not working nights and weekends.)
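One way to automate that workaround (a sketch under assumptions, not behavior the controller provides: it assumes the Python kubernetes client, boto3, and the standard tls.crt/tls.key keys that cert-manager writes) is to copy the certificate from the secret into ACM and keep the certificate-arn annotation pointed at the imported cert:

```python
import base64

import boto3
from kubernetes import client, config

# Read the TLS secret that cert-manager maintains for the Cloudflare origin
# cert, then import it into ACM so the ALB listener can reference it via
# alb.ingress.kubernetes.io/certificate-arn.
config.load_kube_config()
secret = client.CoreV1Api().read_namespaced_secret("palolo-com-tls", "palolo-alpha")

cert_pem = base64.b64decode(secret.data["tls.crt"])
key_pem = base64.b64decode(secret.data["tls.key"])

acm = boto3.client("acm", region_name="us-west-2")
resp = acm.import_certificate(Certificate=cert_pem, PrivateKey=key_pem)
print("imported:", resp["CertificateArn"])
```

Re-running the import with CertificateArn set to the ARN returned the first time re-imports into the same certificate, which is what a weekly rotation would need so the Ingress annotation never has to change.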
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.