csi-driver
ability to specify pod IP in volume attributes
It is possible to specify specific IPs in volume attributes using the csi.cert-manager.io/ip-sans attribute. Would it be possible to use podIP or podIPs from the pod's status field here, to make sure the certificate is issued exactly for the IP that is assigned to the pod requesting the certificate?
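For reference, the attribute takes a static list today, roughly like this (a minimal sketch; the issuer name and addresses are just placeholders):

- name: tls
  csi:
    driver: csi.cert-manager.io
    volumeAttributes:
      csi.cert-manager.io/issuer-name: ca-issuer
      # static, comma-separated list; there is currently no way to reference status.podIP here
      csi.cert-manager.io/ip-sans: "10.0.0.1,10.0.0.2"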
Hi @nerijusk, don't quote me on this, but I believe it should be possible. Though this isn't tested, I imagine it would look something along the lines of:
- name: tls
  projected:
    defaultMode: 0400
    sources:
    - downwardAPI:
        items:
        - path: "csi.cert-manager.io/ip-sans"
          fieldRef:
            fieldPath: status.podIP
    - csi:
        driver: csi.cert-manager.io
        volumeAttributes:
          csi.cert-manager.io/issuer-name: ca-issuer
Having a play, it doesn't look like it's possible
You were much quicker to test, as per your previous post. Could such functionality be developed in this CSI driver?
I don't think we can, actually. I am not particularly against the driver doing a pod status check in terms of permissions and such, but I don't think it is possible. Unless I'm mistaken, the pod IP is not provisioned until after the volume is mounted, so the driver can't get that information during the mount flow.
Unfortunately I too do not know at what point the IP is assigned to the pod relative to volume mount timing. The use case I have is Azure Application Gateway with its ingress controller. In a nutshell, it works by plugging into the same virtual network as AKS and configuring backend pools by pod IPs. If I want end-to-end TLS, the pods in AKS need certificates that include their IPs; otherwise the Application Gateway fails when relaying traffic to a pod over TLS because it cannot find a match for the IP in the certificate. Anyway, thanks for your help. I guess I'll need to employ initContainers to get certs with pod IPs in my case.
What is preventing you from using a ClusterIP Service here, @nerijusk? Just trying to understand the setup.
The short answer is: Azure Application Gateway and its ingress controller, together with the project requirement to use them. :) The Application Gateway configures its backend pool by pod IPs. I'm not an expert on Azure and do not know why it is like that, but it's not configurable.
I already have TLS secret generation working with cert-manager, using an initContainer to create a Certificate resource with the pod name and pod IP. The CSI driver would just make it a bit easier, and I could get rid of the additional containers and a few lines of bash in them.
Feel free to close this issue, unless you want more details. :)
Hi @nerijusk, I'm curious about your solution; would it be possible to share some additional details on how you got it to work? (Or, even better, the initContainer source or some snippets 🙂)
Some context: I'm personally using traefik as a reverse proxy with automatic LetsEncrypt certificates for my ingress routes, but having trouble when connecting them to TLS-enabled backends such as kubernetes-dashboard. I have cert-manager configured with an internal self-signed CA (which is already trusted as a root CA by traefik), but traefik connects to the backend pods by IP addresses directly, so it relies on SAN IP addresses and not on SAN DNS entries. Since it is impossible to know the IP address of a pod before its creation, I was looking into the feasibility of using this CSI driver as well, but it looks like it won't solve the issue...
That left me wondering; how did you manage to make it work with the initContainer, did you have to bind a privileged ServiceAccount to your pod (for creating the Certificate resource, since there seems to be no way to have an exclusive serviceAccountName for initContainers)?
To improve on the initContainer approach, you could consider using the new 'create certificaterequest' command in the recently added CLI tool to automatically create a CertificateRequest with whatever parameters you need: https://github.com/jetstack/cert-manager/pull/2957
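For illustration, an initContainer using that command might look roughly like this (untested; the image name, the mounted cert-template.sh script and the exact flag names are assumptions on my part, so check the plugin's --help output):

- name: create-cr
  # hypothetical image bundling kubectl and the cert-manager kubectl plugin
  image: kubectl-with-cert-manager-plugin
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  command:
  - "/bin/bash"
  - "-c"
  # render a Certificate manifest that embeds ${POD_IP}, then let the plugin generate a key,
  # submit a CertificateRequest built from that manifest and wait for the signed certificate
  # (I believe the resulting key/cert files land in the working directory, hence the cd)
  - ". /cert/cert-template.sh > /tmp/cert.yaml && cd /pod-cert && kubectl cert-manager create certificaterequest ${POD_NAME} --from-certificate-file /tmp/cert.yaml --fetch-certificate"
  volumeMounts:
  - name: cert-template
    mountPath: "/cert"
    readOnly: true
  - name: pod-cert
    mountPath: "/pod-cert"

If that works as described, it would also remove the need for a separate wait container, since --fetch-certificate should block until the certificate is issued (again, worth verifying against the released CLI).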
@Sykkro, below is a code snippet from my pipeline that dynamically generates the k8s Deployment. It should give you an idea of how a cert is issued for each pod in my case:
- name: create-cert
  image: ${KUBECTLIMAGE}
  command:
  - "/bin/bash"
  - "-c"
  - "cd /tmp; export TMPYAML=\$(mktemp) && . /cert/cert.yaml.sh > \${TMPYAML} && kubectl apply -f \${TMPYAML}"
  env:
  - name: ENVIRONMENT
    value: "${ENVIRONMENT}"
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: APPNAME
    value: ${appName}
  volumeMounts:
  - name: cert-template
    mountPath: "/cert"
    readOnly: true
  - name: tmp-volume
    mountPath: "/tmp"
  securityContext:
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    privileged: false
    runAsUser: 1000
- name: wait-for-cert
  image: ${KUBECTLIMAGE}
  env:
  - name: ENVIRONMENT
    value: "${ENVIRONMENT}"
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  command:
  - "/bin/bash"
  - "-c"
  - "kubectl wait --for=condition=ready --timeout=2m --namespace=\${ENVIRONMENT} certificates/\${POD_NAME} && kubectl get secret \${POD_NAME} -n \${ENVIRONMENT} -o jsonpath=\"{.data['tls\\\\.crt']}\" | base64 -d > /pod-cert/tls.crt && kubectl get secret \${POD_NAME} -n \${ENVIRONMENT} -o jsonpath=\"{.data['ca\\\\.crt']}\" | base64 -d > /pod-cert/ca.crt && kubectl get secret \${POD_NAME} -n \${ENVIRONMENT} -o jsonpath=\"{.data['tls\\\\.key']}\" | base64 -d > /pod-cert/tls.key"
  volumeMounts:
  - name: pod-cert
    mountPath: "/pod-cert"
  securityContext:
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    privileged: false
    runAsUser: 1000
The /cert/cert.yaml.sh file is mounted from a ConfigMap, which in turn is installed with Helm along with a bunch of other things when the environment is provisioned. Here's the relevant part of the Helm chart:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cert-template
  namespace: {{ .Release.Namespace }}
data:
  cert.yaml.sh: |-
    cat <<EOF
    apiVersion: cert-manager.io/v1alpha2
    kind: Certificate
    metadata:
      name: ${HOSTNAME}
      namespace: {{ .Release.Namespace }}
    spec:
      secretName: ${HOSTNAME}
      duration: 720h
      renewBefore: 24h
      organization:
      - ORG
      commonName: ${HOSTNAME}.{{ .Release.Namespace }}.svc.cluster.local
      isCA: false
      keySize: 2048
      keyAlgorithm: rsa
      keyEncoding: pkcs1
      usages:
      - server auth
      - client auth
      dnsNames:
      - ${HOSTNAME}
      - ${HOSTNAME}.{{ .Release.Namespace }}
      - ${HOSTNAME}.{{ .Release.Namespace }}.svc
      - ${HOSTNAME}.{{ .Release.Namespace }}.svc.cluster.local
      - ${APPNAME}
      ipAddresses:
      - ${POD_IP}
      issuerRef:
        name: ca-issuer
        kind: ClusterIssuer
        group: cert-manager.io
    EOF
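On the ServiceAccount question above: since these initContainers run kubectl apply/wait/get secret against the API server, the pod's ServiceAccount needs roughly the following namespaced RBAC (a minimal sketch; the Role name and ServiceAccount are illustrative and may need tightening):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cert-creator
  namespace: {{ .Release.Namespace }}
rules:
# create/apply the per-pod Certificate and wait on its Ready condition
- apiGroups: ["cert-manager.io"]
  resources: ["certificates"]
  verbs: ["create", "get", "list", "watch", "patch"]
# read the Secret that cert-manager writes for the issued certificate
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cert-creator
  namespace: {{ .Release.Namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-cert-creator
subjects:
- kind: ServiceAccount
  name: default   # or a dedicated ServiceAccount for the app
  namespace: {{ .Release.Namespace }}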
Thanks for sharing that @nerijusk (and for the suggestion @munnerz)! 😄
Since we have https://github.com/cert-manager/csi-lib/pull/20, does that mean this is now possible, or is there work that still needs to be done in the CSI driver?