cert-manager
Custom labels/annotations in ACME solver services created by Issuer/ClusterIssuer
Is your feature request related to a problem?
We use micro-segmentation in our cloud environments. Our micro-segmentation solution requires the ACME solver Service to carry specific labels/annotations so that traffic is allowed to reach the ACME solver pod and the ACME HTTP-01 challenge can be validated.
Describe the solution you'd like
Support custom labels and annotations in services that are created by the Issuer/ClusterIssuer.
One solution would be to add a serviceTemplate field alongside the existing podTemplate and ingressTemplate:
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
[...]
spec:
  acme:
    [...]
    solvers:
    - http01:
        ingress:
          [...]
          serviceTemplate:
            metadata:
              labels:
                label_1_key: label_1_value
                label_2_key: label_2_value
              annotations:
                annotation_1_key: annotation_1_value
                annotation_2_key: annotation_2_value
```
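On the controller side, applying such a template mostly amounts to merging the user-supplied metadata into the Service the solver creates, while keeping controller-managed keys authoritative. A minimal sketch of that merge (hypothetical helper, not cert-manager's actual implementation):

```go
package main

import "fmt"

// mergeMetadata overlays user-supplied template labels/annotations onto the
// solver Service's metadata. Controller-managed keys take precedence so a
// template cannot break solver selection.
// (Hypothetical helper; the real cert-manager code may differ.)
func mergeMetadata(managed, template map[string]string) map[string]string {
	out := make(map[string]string, len(managed)+len(template))
	for k, v := range template {
		out[k] = v
	}
	for k, v := range managed { // controller-managed keys win on conflict
		out[k] = v
	}
	return out
}

func main() {
	managed := map[string]string{"acme.cert-manager.io/http01-solver": "true"}
	template := map[string]string{"label_1_key": "label_1_value"}
	merged := mergeMetadata(managed, template)
	fmt.Println(merged["acme.cert-manager.io/http01-solver"], merged["label_1_key"])
}
```

Letting the controller's own keys win on conflict mirrors how template metadata is usually handled for the solver pods and ingresses.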
Describe alternatives you've considered
NA
Additional context
NA
/kind feature
I'm also facing this issue and hope a solution exists
Up...
I am using cert-manager in an auto-scaled Kubernetes cluster in Google Cloud. The cluster is not able to scale down underutilised nodes, because the ACME solver pods are not backed by a controller (see https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node for details). Adding the annotation `cluster-autoscaler.kubernetes.io/safe-to-evict: "true"` is supposed to solve this, but I can't use it, since setting custom labels/annotations is not supported.
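For the autoscaler case specifically, the annotation needs to land on the solver pods rather than the Service. Assuming the existing podTemplate field on the http01 ingress solver (referenced earlier in this issue) accepts metadata, a sketch might look like this (issuer name and elided fields are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: example-issuer   # hypothetical name
spec:
  acme:
    # ... server, email, privateKeySecretRef elided ...
    solvers:
    - http01:
        ingress:
          podTemplate:
            metadata:
              annotations:
                cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
```

This would not help with Service labels/annotations, which is what the serviceTemplate proposal above is for.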
So one more vote from my side for adding this feature :)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close
@jetstack-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to jetstack. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.