kube-rbac-proxy
TLS error when upstream is HTTPS
Situation/Reason: The control-plane nodes are bound by CIS v1.9 security requirements, e.g. CIS 1.3.7 (controller-manager --bind-address=127.0.0.1). This means components such as controller-manager, etcd, kube-proxy, and kube-scheduler are only reachable from the host itself. I therefore want to use kube-rbac-proxy as a proxy so that kube-prometheus-stack v62.7.0 can scrape metrics from any of the components Prometheus cannot reach directly (this is working for etcd so far).
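The host-only binding can be confirmed on a control-plane node; a minimal check, assuming the default kubeadm metrics ports (10257 controller-manager, 10259 kube-scheduler, 10249 kube-proxy, 2381 etcd):

# With CIS-style hardening, these ports should all show a 127.0.0.1 local address
ss -ltn | grep -E ':(10257|10259|10249|2381)'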
Problem:
I receive the following error from kube-rbac-proxy when the upstream is HTTPS:
1 log.go:245] http: proxy error: tls: failed to verify certificate: x509: certificate signed by unknown authority
Environment:
- kubeadm=1.31.0 (stacked control-plane)
- containerd=1.7.12
- OS=Ubuntu 24.04.1
- kube-rbac-proxy=0.18.1
Additional Notes: I have attempted various kube-rbac-proxy flag combinations to address the certificate error:
- no flags for any CA or TLS
- pointing the CA at /etc/kubernetes/pki/ca.crt (the CA declared in the controller-manager manifest)
- using my domain's Let's Encrypt certificate
- various combinations of the above
Token Validation:
I am able to use the token attached to the alpine-curl test client directly on the control-plane with curl and get the expected results, so the token has the correct permissions.
curl -k -s -vv -H "Authorization: Bearer $CMTOKEN" https://127.0.0.1:10257/metrics
output:
# HELP aggregator_discovery_aggregation_count_total [ALPHA] Counter of number of times discovery was aggregated
# TYPE aggregator_discovery_aggregation_count_total counter
aggregator_discovery_aggregation_count_total 0
# HELP apiextensions_apiserver_validation_ratcheting_seconds [ALPHA] Time for comparison of old to new for the purposes of CRDValidationRatcheting during an UPDATE in seconds.
# TYPE apiextensions_apiserver_validation_ratcheting_seconds histogram
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="1e-05"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="4e-05"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.00016"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.00064"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.00256"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.01024"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.04096"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.16384"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="0.65536"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="2.62144"} 0
apiextensions_apiserver_validation_ratcheting_seconds_bucket{le="+Inf"} 0
apiextensions_apiserver_validation_ratcheting_seconds_sum 0
apiextensions_apiserver_validation_ratcheting_seconds_count 0
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_cel_compilation_duration_seconds [BETA] CEL compilation time in seconds.
(output truncated)
kube-rbac-proxy-svcacct.yaml
#kube-rbac-proxy-svcacct.yaml
#
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-rbac-proxy
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-rbac-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-rbac-proxy
subjects:
  - kind: ServiceAccount
    name: kube-rbac-proxy
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-rbac-proxy
rules:
  - apiGroups: ["authentication.k8s.io"]
    resources:
      - tokenreviews
    verbs: ["create"]
  - apiGroups: ["authorization.k8s.io"]
    resources:
      - subjectaccessreviews
    verbs: ["create"]
...
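To confirm the ServiceAccount actually received these permissions, a quick sanity check with kubectl (a sketch, using the names from the manifest above):

kubectl auth can-i create tokenreviews.authentication.k8s.io --as=system:serviceaccount:kube-system:kube-rbac-proxy
kubectl auth can-i create subjectaccessreviews.authorization.k8s.io --as=system:serviceaccount:kube-system:kube-rbac-proxy

Both should print "yes"; the successful TokenReview/SubjectAccessReview calls in the logs below confirm this is not where the failure lies.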
kube-rbac-proxy-daemonset-controller-manager.yaml
#kube-rbac-proxy-daemonset-controller-manager.yaml
#
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kube-rbac-proxy-controller-manager
  name: kube-rbac-proxy-controller-manager
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 50001
      targetPort: https
  selector:
    app: kube-rbac-proxy-controller-manager
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-rbac-proxy-controller-manager
  namespace: kube-system
  labels:
    app: kube-rbac-proxy-controller-manager
spec:
  selector:
    matchLabels:
      app: kube-rbac-proxy-controller-manager
  template:
    metadata:
      labels:
        app: kube-rbac-proxy-controller-manager
    spec:
      securityContext:
        runAsUser: 65532
      serviceAccountName: kube-rbac-proxy
      containers:
        - name: kube-rbac-proxy-controller-manager
          image: quay.io/brancz/kube-rbac-proxy:v0.18.1
          args:
            - "--secure-listen-address=0.0.0.0:50001"
            - "--upstream=https://127.0.0.1:10257/"
            - "--auth-token-audiences=kube-rbac-proxy-controller-manager.kube-system.svc"
            - "--client-ca-file=/etc/ssl/certs/abiwot/abiwot-fullchain.pem"
            - "--upstream-ca-file=/etc/ssl/certs/ca.crt"
            - "--upstream-client-cert-file=/etc/ssl/certs/ca.crt"
            - "--upstream-client-key-file=/etc/ssl/certs/ca.key"
            - "--tls-cert-file=/etc/ssl/certs/abiwot/abiwot-fullchain.pem"
            - "--tls-private-key-file=/etc/ssl/certs/abiwot/abiwot-key.pem"
            - "--v=10"
          ports:
            - containerPort: 50001
              name: https
          securityContext:
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cacerts
              mountPath: "/etc/ssl/certs"
              readOnly: true
            - name: abiwot-certs
              mountPath: "/etc/ssl/certs/abiwot"
              readOnly: true
      hostNetwork: true
      tolerations:
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      volumes:
        - name: cacerts
          secret:
            secretName: kube-rbac-proxy-cacrt
        - name: abiwot-certs
          secret:
            secretName: kube-rbac-proxy-abiwot
...
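Given the error, it is also worth checking what CA material the proxy actually sees at /etc/ssl/certs/ca.crt. A hedged sketch, assuming the kube-rbac-proxy-cacrt secret stores the kubeadm cluster CA under the key ca.crt:

# Decode the mounted CA and show who it is; it should match the CA that
# signed whatever certificate the upstream serves on :10257
kubectl -n kube-system get secret kube-rbac-proxy-cacrt -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates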
alpine-curl.yaml
# metrics-scraper
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics
rules:
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics
subjects:
  - kind: ServiceAccount
    name: metrics
    namespace: kube-system
---
apiVersion: v1
kind: Pod
metadata:
  name: metrics-scraper
  namespace: kube-system
spec:
  nodeName: cdak8ctr001
  serviceAccountName: metrics
  containers:
    - command:
        - tail
        - -f
        - /dev/null
      image: alpine/curl
      name: metrics-scraper
      resources: {}
      volumeMounts:
        - name: certs
          mountPath: "/etc/ssl/certs"
          readOnly: true
        - name: token-vol
          mountPath: "/service-account"
          readOnly: true
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      effect: "NoSchedule"
  volumes:
    - name: certs
      secret:
        secretName: kube-rbac-proxy-cacrt
    - name: token-vol
      projected:
        sources:
          - serviceAccountToken:
              audience: kube-rbac-proxy-controller-manager.kube-system.svc
              expirationSeconds: 3600
              path: token
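The request can be reproduced from inside the test pod with something like the following (the single quotes ensure the projected token is read inside the container, not on the local machine; the log capture further below shows the same call made interactively):

kubectl -n kube-system exec -it metrics-scraper -- sh -c 'curl -sk -H "Authorization: Bearer $(cat /service-account/token)" https://kube-rbac-proxy-controller-manager.kube-system.svc:50001/metrics'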
LOGS:
kube-rbac-proxy-controller-manager
I0927 12:00:31.388706 1 request.go:1351] Request Body: {"kind":"TokenReview","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6InAzUGxxaFJfWjRjTnlDUGFwcWMyc3g4N2FTa3lVb1BTeWxwYm82T09yb0UifQ.eyJhdWQiOlsia3ViZS1yYmFjLXByb3h5LWNvbnRyb2xsZXItbWFuYWdlci5rdWJlLXN5c3RlbS5zdmMiXSwiZXhwIjoxNzI3NDQwNTM3LCJpYXQiOjE3Mjc0MzY5MzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMWY4ZTJlN2MtZmE3Yy00ODcwLTk4M2QtMTg1M2NmNTljYzg1Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsIm5vZGUiOnsibmFtZSI6ImNkYWs4Y3RyMDAxIiwidWlkIjoiZWEyNjgxY2YtZGYzMy00NDk0LTk3OTktZDgwM2U1MjdiMDRlIn0sInBvZCI6eyJuYW1lIjoibWV0cmljcy1zY3JhcGVyIiwidWlkIjoiOTMwZWZiZDctN2ZhYy00MjUxLTgxMWQtNDU0MWViNzE0ZmM1In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJtZXRyaWNzIiwidWlkIjoiNmVkY2U3OGYtZDc1NC00MmI5LWFiNTgtOWY1MTY3OWY0ODIxIn19LCJuYmYiOjE3Mjc0MzY5MzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTptZXRyaWNzIn0.Qmn-xHJRfSX5xFrstyszxj0mZ4S6_cwTxYEXXR6ZIb97_YCcckGlMlSHHsPbpQAdStwaObl1Iq6rCRsWmrcQYJgMALUqBxJ4HfGZvfcDok0f3_6D09W6Zsod5bSvhLPnY1ZlAr59LhxRIpVhvepreZAh8aEzZlaNvWmgBa2FlzOzdoLSleEVAyhwmwLHu5HsXsZpRvfY8yV-9swzf8OK18B0RS8lkvsVIlE2SbaRvdvfePBCtgGZYIvmPwPCPg2tMhG5YQ7l5ZAYLjs7vtPg-FVtN0i6jFs0NX_CKI60cF6fZyb-7nCC0DMuf8Aqw6KhDULiqKbCXlS4EcD2zw3eBg","audiences":["kube-rbac-proxy-controller-manager.kube-system.svc"]},"status":{"user":{}}}
I0927 12:00:31.389329 1 round_trippers.go:466] curl -v -XPOST -H "Authorization: Bearer <masked>" -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kube-rbac-proxy/v0.0.0 (linux/amd64) kubernetes/$Format" 'https://172.16.0.1:443/apis/authentication.k8s.io/v1/tokenreviews'
I0927 12:00:31.391692 1 round_trippers.go:510] HTTP Trace: Dial to tcp:172.16.0.1:443 succeed
I0927 12:00:31.410608 1 round_trippers.go:553] POST https://172.16.0.1:443/apis/authentication.k8s.io/v1/tokenreviews 201 Created in 21 milliseconds
I0927 12:00:31.410656 1 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 14 ms ServerProcessing 3 ms Duration 21 ms
I0927 12:00:31.410682 1 round_trippers.go:577] Response Headers:
I0927 12:00:31.410717 1 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 32a821ce-f770-41eb-8f0b-b8e427aecc38
I0927 12:00:31.410744 1 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 170bc7d3-8f58-4dce-909e-307874534ade
I0927 12:00:31.410764 1 round_trippers.go:580] Content-Length: 2240
I0927 12:00:31.411189 1 round_trippers.go:580] Date: Fri, 27 Sep 2024 12:00:31 GMT
I0927 12:00:31.411212 1 round_trippers.go:580] Audit-Id: fbd23f26-8c8b-4830-8d88-4a0ebc5eee36
I0927 12:00:31.411234 1 round_trippers.go:580] Cache-Control: no-cache, private
I0927 12:00:31.411256 1 round_trippers.go:580] Content-Type: application/json
I0927 12:00:31.412246 1 request.go:1351] Response Body: {"kind":"TokenReview","apiVersion":"authentication.k8s.io/v1","metadata":{"creationTimestamp":null,"managedFields":[{"manager":"kube-rbac-proxy","operation":"Update","apiVersion":"authentication.k8s.io/v1","time":"2024-09-27T12:00:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:audiences":{},"f:token":{}}}}]},"spec":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6InAzUGxxaFJfWjRjTnlDUGFwcWMyc3g4N2FTa3lVb1BTeWxwYm82T09yb0UifQ.eyJhdWQiOlsia3ViZS1yYmFjLXByb3h5LWNvbnRyb2xsZXItbWFuYWdlci5rdWJlLXN5c3RlbS5zdmMiXSwiZXhwIjoxNzI3NDQwNTM3LCJpYXQiOjE3Mjc0MzY5MzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMWY4ZTJlN2MtZmE3Yy00ODcwLTk4M2QtMTg1M2NmNTljYzg1Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsIm5vZGUiOnsibmFtZSI6ImNkYWs4Y3RyMDAxIiwidWlkIjoiZWEyNjgxY2YtZGYzMy00NDk0LTk3OTktZDgwM2U1MjdiMDRlIn0sInBvZCI6eyJuYW1lIjoibWV0cmljcy1zY3JhcGVyIiwidWlkIjoiOTMwZWZiZDctN2ZhYy00MjUxLTgxMWQtNDU0MWViNzE0ZmM1In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJtZXRyaWNzIiwidWlkIjoiNmVkY2U3OGYtZDc1NC00MmI5LWFiNTgtOWY1MTY3OWY0ODIxIn19LCJuYmYiOjE3Mjc0MzY5MzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTptZXRyaWNzIn0.Qmn-xHJRfSX5xFrstyszxj0mZ4S6_cwTxYEXXR6ZIb97_YCcckGlMlSHHsPbpQAdStwaObl1Iq6rCRsWmrcQYJgMALUqBxJ4HfGZvfcDok0f3_6D09W6Zsod5bSvhLPnY1ZlAr59LhxRIpVhvepreZAh8aEzZlaNvWmgBa2FlzOzdoLSleEVAyhwmwLHu5HsXsZpRvfY8yV-9swzf8OK18B0RS8lkvsVIlE2SbaRvdvfePBCtgGZYIvmPwPCPg2tMhG5YQ7l5ZAYLjs7vtPg-FVtN0i6jFs0NX_CKI60cF6fZyb-7nCC0DMuf8Aqw6KhDULiqKbCXlS4EcD2zw3eBg","audiences":["kube-rbac-proxy-controller-manager.kube-system.svc"]},"status":{"authenticated":true,"user":{"username":"system:serviceaccount:kube-system:metrics","uid":"6edce78f-d754-42b9-ab58-9f51679f4821","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"],"extra":{"authentication.kubernetes.io/credential-id":["JTI=1f8e2e7c-fa7c-4870-983d-1853cf59cc85"],"authentication.kubernetes.io/node-name":["cdak8ctr001"],"authentication.kubernetes.io/node-uid":["ea2681cf-df33-4494-9799-d803e527b04e"],"authentication.kubernetes.io/pod-name":["metrics-scraper"],"authentication.kubernetes.io/pod-uid":["930efbd7-7fac-4251-811d-4541eb714fc5"]}},"audiences":["kube-rbac-proxy-controller-manager.kube-system.svc"]}}
I0927 12:00:31.415545 1 request.go:1351] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/metrics","verb":"get"},"user":"system:serviceaccount:kube-system:metrics","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"],"extra":{"authentication.kubernetes.io/credential-id":["JTI=1f8e2e7c-fa7c-4870-983d-1853cf59cc85"],"authentication.kubernetes.io/node-name":["cdak8ctr001"],"authentication.kubernetes.io/node-uid":["ea2681cf-df33-4494-9799-d803e527b04e"],"authentication.kubernetes.io/pod-name":["metrics-scraper"],"authentication.kubernetes.io/pod-uid":["930efbd7-7fac-4251-811d-4541eb714fc5"]},"uid":"6edce78f-d754-42b9-ab58-9f51679f4821"},"status":{"allowed":false}}
I0927 12:00:31.416790 1 round_trippers.go:466] curl -v -XPOST -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kube-rbac-proxy/v0.0.0 (linux/amd64) kubernetes/$Format" -H "Authorization: Bearer <masked>" 'https://172.16.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews'
I0927 12:00:31.420854 1 round_trippers.go:553] POST https://172.16.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews 201 Created in 4 milliseconds
I0927 12:00:31.420899 1 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 2 ms Duration 4 ms
I0927 12:00:31.420923 1 round_trippers.go:577] Response Headers:
I0927 12:00:31.420975 1 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 32a821ce-f770-41eb-8f0b-b8e427aecc38
I0927 12:00:31.420994 1 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 170bc7d3-8f58-4dce-909e-307874534ade
I0927 12:00:31.421016 1 round_trippers.go:580] Content-Length: 1429
I0927 12:00:31.421035 1 round_trippers.go:580] Date: Fri, 27 Sep 2024 12:00:31 GMT
I0927 12:00:31.421473 1 round_trippers.go:580] Audit-Id: c0373bc7-2760-4265-ae03-ec5625ad0bad
I0927 12:00:31.421712 1 round_trippers.go:580] Cache-Control: no-cache, private
I0927 12:00:31.422280 1 round_trippers.go:580] Content-Type: application/json
I0927 12:00:31.423788 1 request.go:1351] Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null,"managedFields":[{"manager":"kube-rbac-proxy","operation":"Update","apiVersion":"authorization.k8s.io/v1","time":"2024-09-27T12:00:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:extra":{".":{},"f:authentication.kubernetes.io/credential-id":{},"f:authentication.kubernetes.io/node-name":{},"f:authentication.kubernetes.io/node-uid":{},"f:authentication.kubernetes.io/pod-name":{},"f:authentication.kubernetes.io/pod-uid":{}},"f:groups":{},"f:nonResourceAttributes":{".":{},"f:path":{},"f:verb":{}},"f:uid":{},"f:user":{}}}}]},"spec":{"nonResourceAttributes":{"path":"/metrics","verb":"get"},"user":"system:serviceaccount:kube-system:metrics","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"],"extra":{"authentication.kubernetes.io/credential-id":["JTI=1f8e2e7c-fa7c-4870-983d-1853cf59cc85"],"authentication.kubernetes.io/node-name":["cdak8ctr001"],"authentication.kubernetes.io/node-uid":["ea2681cf-df33-4494-9799-d803e527b04e"],"authentication.kubernetes.io/pod-name":["metrics-scraper"],"authentication.kubernetes.io/pod-uid":["930efbd7-7fac-4251-811d-4541eb714fc5"]},"uid":"6edce78f-d754-42b9-ab58-9f51679f4821"},"status":{"allowed":true,"reason":"RBAC: allowed by ClusterRoleBinding \"metrics\" of ClusterRole \"metrics\" to ServiceAccount \"metrics/kube-system\""}}
I0927 12:00:31.440254 1 log.go:245] http: proxy error: tls: failed to verify certificate: x509: certificate signed by unknown authority
I0927 12:00:33.418297 1 log.go:245] http: proxy error: tls: failed to verify certificate: x509: certificate signed by unknown authority
alpine-curl (test client):
/ # curl -k -s -vvvv -H "Authorization: Bearer `cat /service-account/token`" https://kube-rbac-proxy-controller-manager.kube-system.svc:50001/metrics
* Host kube-rbac-proxy-controller-manager.kube-system.svc:50001 was resolved.
* IPv6: (none)
* IPv4: 172.16.3.253
* Trying 172.16.3.253:50001...
* Connected to kube-rbac-proxy-controller-manager.kube-system.svc (172.16.3.253) port 50001
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 / x25519 / id-ecPublicKey
* ALPN: server accepted h2
* Server certificate:
* subject: CN=*.abiwot-lab.com
* start date: Sep 1 10:29:43 2024 GMT
* expire date: Nov 30 10:29:42 2024 GMT
* issuer: C=US; O=Let's Encrypt; CN=E6
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Certificate level 0: Public key type EC/prime256v1 (256/128 Bits/secBits), signed using ecdsa-with-SHA384
* Certificate level 1: Public key type EC/secp384r1 (384/192 Bits/secBits), signed using sha256WithRSAEncryption
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://kube-rbac-proxy-controller-manager.kube-system.svc:50001/metrics
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: kube-rbac-proxy-controller-manager.kube-system.svc:50001]
* [HTTP/2] [1] [:path: /metrics]
* [HTTP/2] [1] [user-agent: curl/8.9.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InAzUGxxaFJfWjRjTnlDUGFwcWMyc3g4N2FTa3lVb1BTeWxwYm82T09yb0UifQ.eyJhdWQiOlsia3ViZS1yYmFjLXByb3h5LWNvbnRyb2xsZXItbWFuYWdlci5rdWJlLXN5c3RlbS5zdmMiXSwiZXhwIjoxNzI3NDQwNTM3LCJpYXQiOjE3Mjc0MzY5MzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMWY4ZTJlN2MtZmE3Yy00ODcwLTk4M2QtMTg1M2NmNTljYzg1Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsIm5vZGUiOnsibmFtZSI6ImNkYWs4Y3RyMDAxIiwidWlkIjoiZWEyNjgxY2YtZGYzMy00NDk0LTk3OTktZDgwM2U1MjdiMDRlIn0sInBvZCI6eyJuYW1lIjoibWV0cmljcy1zY3JhcGVyIiwidWlkIjoiOTMwZWZiZDctN2ZhYy00MjUxLTgxMWQtNDU0MWViNzE0ZmM1In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJtZXRyaWNzIiwidWlkIjoiNmVkY2U3OGYtZDc1NC00MmI5LWFiNTgtOWY1MTY3OWY0ODIxIn19LCJuYmYiOjE3Mjc0MzY5MzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTptZXRyaWNzIn0.Qmn-xHJRfSX5xFrstyszxj0mZ4S6_cwTxYEXXR6ZIb97_YCcckGlMlSHHsPbpQAdStwaObl1Iq6rCRsWmrcQYJgMALUqBxJ4HfGZvfcDok0f3_6D09W6Zsod5bSvhLPnY1ZlAr59LhxRIpVhvepreZAh8aEzZlaNvWmgBa2FlzOzdoLSleEVAyhwmwLHu5HsXsZpRvfY8yV-9swzf8OK18B0RS8lkvsVIlE2SbaRvdvfePBCtgGZYIvmPwPCPg2tMhG5YQ7l5ZAYLjs7vtPg-FVtN0i6jFs0NX_CKI60cF6fZyb-7nCC0DMuf8Aqw6KhDULiqKbCXlS4EcD2zw3eBg]
> GET /metrics HTTP/2
> Host: kube-rbac-proxy-controller-manager.kube-system.svc:50001
> User-Agent: curl/8.9.1
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InAzUGxxaFJfWjRjTnlDUGFwcWMyc3g4N2FTa3lVb1BTeWxwYm82T09yb0UifQ.eyJhdWQiOlsia3ViZS1yYmFjLXByb3h5LWNvbnRyb2xsZXItbWFuYWdlci5rdWJlLXN5c3RlbS5zdmMiXSwiZXhwIjoxNzI3NDQwNTM3LCJpYXQiOjE3Mjc0MzY5MzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMWY4ZTJlN2MtZmE3Yy00ODcwLTk4M2QtMTg1M2NmNTljYzg1Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsIm5vZGUiOnsibmFtZSI6ImNkYWs4Y3RyMDAxIiwidWlkIjoiZWEyNjgxY2YtZGYzMy00NDk0LTk3OTktZDgwM2U1MjdiMDRlIn0sInBvZCI6eyJuYW1lIjoibWV0cmljcy1zY3JhcGVyIiwidWlkIjoiOTMwZWZiZDctN2ZhYy00MjUxLTgxMWQtNDU0MWViNzE0ZmM1In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJtZXRyaWNzIiwidWlkIjoiNmVkY2U3OGYtZDc1NC00MmI5LWFiNTgtOWY1MTY3OWY0ODIxIn19LCJuYmYiOjE3Mjc0MzY5MzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTptZXRyaWNzIn0.Qmn-xHJRfSX5xFrstyszxj0mZ4S6_cwTxYEXXR6ZIb97_YCcckGlMlSHHsPbpQAdStwaObl1Iq6rCRsWmrcQYJgMALUqBxJ4HfGZvfcDok0f3_6D09W6Zsod5bSvhLPnY1ZlAr59LhxRIpVhvepreZAh8aEzZlaNvWmgBa2FlzOzdoLSleEVAyhwmwLHu5HsXsZpRvfY8yV-9swzf8OK18B0RS8lkvsVIlE2SbaRvdvfePBCtgGZYIvmPwPCPg2tMhG5YQ7l5ZAYLjs7vtPg-FVtN0i6jFs0NX_CKI60cF6fZyb-7nCC0DMuf8Aqw6KhDULiqKbCXlS4EcD2zw3eBg
>
* Request completely sent off
< HTTP/2 502
< content-length: 0
< date: Fri, 27 Sep 2024 12:00:34 GMT
<
* Connection #0 to host kube-rbac-proxy-controller-manager.kube-system.svc left intact
Forgot to mention: I looked at this previous issue, but it either did not help or I did not fully understand the solution. https://github.com/brancz/kube-rbac-proxy/issues/227
The --upstream-ca-file must be the CA that signed the upstream's certificate. This means that, e.g., the controller-manager's cert must be signed by whatever is in /etc/ssl/certs/ca.crt. Can you verify this with openssl?
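One minimal way to check this, run on a control-plane node (a sketch, assuming the kubeadm default port 10257 and CA path):

# Extract the certificate the controller-manager actually serves on :10257...
echo | openssl s_client -connect 127.0.0.1:10257 2>/dev/null | openssl x509 > /tmp/cm-serving.crt
# ...and try to verify it against the CA handed to --upstream-ca-file.
# An "unable to get local issuer certificate" failure here reproduces the proxy error.
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /tmp/cm-serving.crt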
It will not match, as by default the controller-manager uses /etc/kubernetes/pki/ca.crt to create/sign certs. What I was trying to use was my Let's Encrypt cert, so I am now testing with that cert mounted again to verify whether it works.
As I dig deeper into this, it seems the real issue has to do with the certificates inside the controller-manager and kube-scheduler as deployed by default via kubeadm.
The certificate serving /metrics on each of these is actually not signed by /etc/kubernetes/pki/ca.crt: when no serving certs are provided during the initialization phase, these components generate their own internal CA and certificate on the fly.
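This can be seen by inspecting the issuer of the served certificate (a sketch; on a stock kubeadm cluster the issuer is expected to be a generated in-memory signer rather than the kubeadm cluster CA):

echo | openssl s_client -connect 127.0.0.1:10257 2>/dev/null | openssl x509 -noout -subject -issuer -dates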
https://github.com/kubernetes/kubeadm/issues/2244
https://github.com/prometheus-operator/kube-prometheus/issues/718
So I think my plan to use kube-rbac-proxy as the "middle-man" proxy for scraping metrics from a default kube-prometheus-stack deployment will not really work.
It is quite common to give the metrics endpoint a different certificate. It is a kind of "authentication" by cert :)
I will close this. If you feel differently, feel free to reopen the issue with more information.
@ibihim I'm facing the same issue here. Could you provide an upstream-insecure-skip-verify flag?
If kubeadm is used, the kube-controller-manager and kube-scheduler use on-the-fly certificates generated at startup, which are not exposed. Establishing a trusted connection is not possible in that context.
Readiness and liveness probes for kube-scheduler do not verify TLS certificates. Here’s how it works: kubelet connects locally and skips certificate validation for static pods. It would be great if kube-rbac-proxy could support the same behavior.