sealed-secrets
Provide an Ingress snippet to serve cert.pem
Users should seal their secrets with the latest available certificate. But how do they know which one that is?
There are a few problems:
- The certificate has to be authenticated, i.e. users should be able to verify that the certificate is the one they're supposed to use, as opposed to something Mallory tricks them into using.
- Every month or so a new certificate gets created.
Sealed secrets can operate roughly in two modes:
a. online: grab the latest certificate by connecting to the target cluster (e.g. if you can `kubectl proxy`, you can use the online method)
b. offline: pass a certificate file/URL via `--cert`.
The online method is a low-friction operational mode of sealed-secrets, but the offline mode is what enables some advanced use cases that are quite peculiar to sealed-secrets. It allows users to operate clusters even if they cannot touch them directly (e.g. pure GitOps without even having to get on the VPN).
We don't mandate how people use the offline mode, but that doesn't mean we cannot make it a bit easier to set up a reasonable distribution channel for the public key:
Expose controller's `/v1/cert.pem` internal http endpoint via an Ingress, possibly authenticated with a TLS certificate (e.g. letsencrypt).
We probably cannot just include said Ingress resource in the main controller.yaml, because setting up an ingress requires some user choices, such as picking a domain name, picking the right annotations to choose the desired load balancer (internal, external, etc.) and, last but not least, the right kind of TLS certificate.
Hence, we should include a new "pubkey-ingress.yaml" file that users can optionally apply (filling in the missing bits manually or with kustomize).
Or perhaps a full controller-with-ingress.yaml?
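For reference, a rough sketch of what such a pubkey-ingress.yaml could look like (just an illustration, not a tested manifest: the host name, ingress class and TLS secret are placeholders, and it assumes the default sealed-secrets-controller Service in kube-system serving HTTP on port 8080):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sealed-secrets-cert
  namespace: kube-system
  annotations:
    # Placeholder: use the class/annotations that match your ingress controller
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - sealed-secrets.example.com          # placeholder host
      secretName: sealed-secrets-cert-tls     # placeholder TLS secret (e.g. issued by cert-manager)
  rules:
    - host: sealed-secrets.example.com        # placeholder host
      http:
        paths:
          # Only expose the public certificate endpoint of the controller's HTTP server
          - path: /v1/cert.pem
            pathType: Exact
            backend:
              service:
                name: sealed-secrets-controller  # assumes the default controller Service name
                port:
                  number: 8080                   # assumes the default controller HTTP port
```

With something like this in place, `kubeseal --cert https://sealed-secrets.example.com/v1/cert.pem` can fetch the key without any cluster access.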
CC: @jjo, @dbarranco, @jbianquetti-nami
IMO, most users will use the "online" model, just because it's pretty straightforward. But the really interesting stuff is in the "offline" model, since it allows true decoupling.
I won't push for one model or the other, but providing pubkey-ingress.yaml as a scaffold seems the easiest way to achieve the goal without introducing too much case-specific machinery. I think it's perfectly feasible to explain offline use cases with some kustomize or jsonnet examples.
I think I prefer leaving the controller as it is and adding the pubkey-ingress.yaml file along with some docs. That sounds better to me.
Thanks for creating the issue!
A disadvantage of using an Ingress resource on AWS is that by default it creates a new load balancer, which adds additional cost.
You can install an nginx-ingress controller and share a single AWS load balancer between multiple Ingress resources. See https://github.com/kubernetes/ingress-nginx
Was excited to find this ticket, but can't find the ingress example. I understand not wanting to make it part of the project itself, but maybe someone could post an example here? :)
I want to make it an (optional) part of this project; just didn't have time yet to write the yaml.
It shouldn't be hard, see https://kubernetes.io/docs/concepts/services-networking/ingress/
The folks doing the helm chart have already done it: https://github.com/helm/charts/blob/master/stable/sealed-secrets/values.yaml#L29
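As an illustration only, a minimal values override for that chart could look roughly like this (field names taken from the linked values.yaml; the ingress class, host and TLS secret are placeholders):

```yaml
# values.yaml -- minimal sketch for exposing the controller's cert endpoint via the chart's ingress
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx        # placeholder: match your ingress controller
  path: /v1/cert.pem
  hosts:
    - sealed-secrets.example.com              # placeholder host
  tls:
    - secretName: sealed-secrets-cert-tls     # placeholder TLS secret
      hosts:
        - sealed-secrets.example.com
```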
For anyone struggling to set up the Helm chart to serve the public key via HTTP(S) and stumbling onto this issue while doing research: here's my (Flux-backed, but pretty straightforward) setup, combining the different workarounds that are currently needed to make this work:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: sealed-secrets
  # Using a different namespace is possible, but requires users of the kubeseal cli tool to pass args/envs
  namespace: kube-system
spec:
  releaseName: sealed-secrets
  chart:
    spec:
      chart: sealed-secrets
      # Hardcode the version to get the latest, as the sealed-secrets naming convention for the chart is
      # incompatible with Helm, which considers it a pre-release and won't install versions later than 13.x
      # ref: https://github.com/bitnami-labs/sealed-secrets/issues/523#issuecomment-804913116
      version: "1.15.0-r3"
      sourceRef:
        kind: HelmRepository
        name: sealed-secrets
  interval: 10m
  values:
    # Do a fullnameOverride because the chart otherwise generates incompatible names for k8s resources
    # ref: https://github.com/bitnami-labs/sealed-secrets/issues/571#issue-863495808
    fullnameOverride: sealed-secrets-controller
    ingress:
      enabled: true
      annotations:
        # Replace with the proper annotations for your ingress controller if applicable. In the case of
        # ingress-nginx you could also set up HTTP Basic Auth and expose the /v1/rotate endpoint as well.
        # ref: https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
        kubernetes.io/ingress.class: nginx
        # Rewrite https://dev.domain.com/sealed-secrets/cert.pem to /v1/cert.pem
        # This way we can make use of the controller's built-in HTTP server without having to dedicate the
        # entire hostname's "/v1"-prefixed set of URIs to sealed-secrets
        nginx.ingress.kubernetes.io/rewrite-target: '/v1/$1'
        # Set up letsencrypt ACME certs through cert-manager and force TLS (if you want to)
        cert-manager.io/cluster-issuer: letsencrypt-prod
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      # The controller listens on /v1/{cert.pem,rotate,verify}. Capture the endpoints you wish to expose in
      # a regex group used by the nginx.ingress.kubernetes.io/rewrite-target annotation above.
      # ref: https://github.com/bitnami-labs/sealed-secrets/blob/main/cmd/controller/server.go#L35
      path: '/sealed-secrets/(verify|cert\.pem)'
      hosts:
        - dev.domain.com
      # TLS config, if applicable
      tls:
        - secretName: dev-domain-com-tls
          hosts:
            - dev.domain.com
```
Verify it works:
```console
$ curl -s https://dev.domain.com/sealed-secrets/cert.pem
-----BEGIN CERTIFICATE-----
MIIErjCCApagAwIBAgIRAPTEgKbnvpcBLei2j/Qog0owDQYJKoZIhvcNAQELBQAw
ADAeFw0yMTA0MjYwNzI5MzJaFw0zMTA0MjQwNzI5MzJaMAAwggIiMA0GCSqGSIb3
[...]
kBy2znEtmAS9+UiyVYC625f6KZPfIrmvKynf82gpUSJftpmfiCwSZGpq/alxyofN
ou49eoVi7m5Ny/iYT3gNr0dJw7HEhOQ6dtoYW/ODQ+qiFcH+Qvvf6eGpuBPR6Vb0
2+M=
-----END CERTIFICATE-----
```
You can now have teams fetch fresh public keys from the web when they're encrypting secrets, without having to manually manage offline copies or grant specific RBAC roles (or even cluster access, for that matter):

```console
kubeseal --cert="https://dev.domain.com/sealed-secrets/cert.pem" ...
```
:rocket:
@mkmik feel free to use this as a base for an example snippet in the docs; I don't have the resources right now to set up a "vanilla" deployment of sealed-secrets according to the README's installation instructions and make sure it works outside of my specific setup.