Webhook --dynamic-serving-dns-names doesn't set SANs
Describe the bug:
I'm trying to use --dynamic-serving-dns-names on the webhook daemon to add subject alternative names to the certificate used by the webhook pod.
e.g. I've tried --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.cert-manager,cert-manager-webhook.cert-manager.svc,cert-webhook.example.com
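(A quick way to confirm the flag actually reached the running webhook pod — deployment name assumes the default Helm install — is something like:
kubectl -n cert-manager get deploy cert-manager-webhook -o jsonpath='{.spec.template.spec.containers[0].args}'
)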
However, the certificate generated doesn't seem to have any of these DNS names.
Checked with, e.g.:
kubectl -n cert-manager get secret cert-manager-webhook-ca -o yaml | yq -r '.data["tls.crt"] | @base64d' | openssl x509 -text -noout
which gives:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            23:92:50:f2:cd:27:34:01:d1:76:79:ee:7e:3e:81:8e
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: CN = cert-manager-webhook-ca
        Validity
            Not Before: May 30 05:26:30 2022 GMT
            Not After : May 30 05:26:30 2023 GMT
        Subject: CN = cert-manager-webhook-ca
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (384 bit)
                pub:
                    04:50:7b:6f:8a:13:e6:5c:fd:04:bd:65:cb:4b:0f:
                    f3:10:69:fd:71:90:29:d2:c7:90:b3:d5:fd:f7:91:
                    8a:85:ed:aa:e5:5d:61:57:27:41:6d:f7:85:8d:86:
                    c8:9d:17:93:21:75:7a:f7:45:86:b4:10:0d:74:22:
                    69:e9:e7:6f:4d:1d:ef:da:04:93:72:9e:31:b3:b7:
                    69:db:14:e8:43:1b:d6:75:18:94:0a:8a:2d:03:64:
                    87:ff:ec:af:93:0a:89
                ASN1 OID: secp384r1
                NIST CURVE: P-384
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                D4:7A:94:A2:E6:8F:EE:1E:6F:FF:82:FA:33:5F:7A:D9:F4:EB:6A:0D
    Signature Algorithm: ecdsa-with-SHA384
         30:65:02:30:6a:6b:25:3b:00:af:4b:71:94:f6:da:63:3f:5f:
         db:ae:9c:41:1a:4d:72:16:52:0e:51:e0:e9:4e:b8:f7:c9:93:
         22:14:36:21:f9:0a:f0:a9:04:2b:45:96:74:1a:dd:03:02:31:
         00:87:2f:91:51:7d:ae:ba:38:4e:33:b7:02:c7:d8:09:6d:20:
         af:9c:1f:ea:89:73:65:ee:71:65:cc:41:7e:b5:60:e3:d4:2c:
         18:30:80:e9:e8:c1:ce:64:3f:ac:8a:df:c5
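(For reference, the SAN section alone can be checked by filtering the same output, e.g.:
kubectl -n cert-manager get secret cert-manager-webhook-ca -o yaml | yq -r '.data["tls.crt"] | @base64d' | openssl x509 -text -noout | grep -A1 'Subject Alternative Name'
which prints nothing here, since the extension is absent.)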
Expected behaviour:
The generated certificate should have subject alternative names (SANs) containing the --dynamic-serving-dns-names provided.
Steps to reproduce the bug: See description above.
Anything else we need to know?:
- I came up with https://github.com/cert-manager/cert-manager/pull/5163 while trying to debug this.
- This is likely the cause of several historical "tls: bad certificate" errors in the issue tracker.
Environment details:
- Kubernetes version:
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.9-eks-0d102a7", GitCommit:"eb09fc479c1b2bfcc35c47416efb36f1b9052d58", GitTreeState:"clean", BuildDate:"2022-02-17T16:36:28Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
- Cloud-provider/provisioner: EKS
- cert-manager version: 1.6.1; replicated with 1.8.0
- Install method: helm, which is then kustomize-d
/kind bug
Hi @james-callahan, thanks for opening the issue!
The cert that is stored in the cert-manager-webhook-ca Secret is a self-signed CA cert that will be used to sign the actual webhook's serving cert (which is only stored in memory and should have the DNS names from the --dynamic-serving-dns-names flag).
(You should be able to verify that the serving cert is as expected with something like:
kubectl port-forward -ncert-manager svc/cert-manager-webhook 8081:443
openssl s_client -showcerts -connect localhost:8081 2>/dev/null | openssl x509 -text -noout
)
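(To see just the DNS names on the serving cert, the same output can be filtered while the port-forward above is running, e.g.:
openssl s_client -showcerts -connect localhost:8081 2>/dev/null | openssl x509 -noout -text | grep 'DNS:'
)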
(If you are struggling with webhook configuration on EKS, note that there are some known issues with the networking setup there: https://cert-manager.io/docs/concepts/webhook/#webhook-connection-problems-on-aws-eks)
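(A rough in-cluster reachability check — service name and namespace assume the default install — is something like:
kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never --command -- curl -vk https://cert-manager-webhook.cert-manager.svc
Any HTTP response, even an error, shows the TLS handshake succeeded; note this does not exercise the control-plane-to-webhook path that the EKS issues linked above affect.)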
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close
@jetstack-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to jetstack. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.