kube-lego
What RBAC Permissions to apply
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: []
^ That doesn't seem to work ;( I also tried the kube-cert-manager RBAC rules, to no avail; I'm still receiving:
time="2017-03-02T03:11:41Z" level=info msg="kube-lego 0.1.3-d425b293 starting" context=kubelego
time="2017-03-02T03:11:41Z" level=info msg="connected to kubernetes api v1.5.2+coreos.1" context=kubelego
time="2017-03-02T03:11:41Z" level=info msg="server listening on http://:8080/" context=acme
E0302 03:11:41.976226 1 reflector.go:214] github.com/jetstack/kube-lego/pkg/kubelego/watch.go:104: Failed to list *extensions.Ingress: the server does not allow access to the requested resource (get ingresses.extensions)
log: exiting because of error: log: cannot create log: open /tmp/kube-lego.legotest-kube-lego-963911544-1zz8f.unknownuser.log.ERROR.20170302-031141.1: no such file or directory
Could use some help here on what I'm missing... I'll also give back by writing an RBAC template and submitting a PR.
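For anyone debugging the same thing: a quick way to test what the ServiceAccount is actually allowed to do is `kubectl auth can-i` with impersonation. This needs a newer kubectl than the 1.5-era cluster above shipped with, plus impersonation rights for the caller; the namespace and ServiceAccount name below are guesses based on the pod name in the logs:
# Does the ServiceAccount have the permission kube-lego is failing on?
kubectl auth can-i list ingresses.extensions \
  --as=system:serviceaccount:default:legotest-kube-lego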
EDIT: Here's the full Helm YAML I'm using to create the auth resources for it:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: {{ template "fullname" . }}
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: []
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: {{ template "fullname" . }}
subjects:
# The subject is the target service account
- kind: ServiceAccount
  name: {{ template "fullname" . }}
roleRef:
  # The roleRef specifies the role to give to the service account.
  kind: Role
  name: {{ template "fullname" . }} # Tectonic also provides "readonly", "user", and "admin" cluster roles.
  apiGroup: rbac.authorization.k8s.io
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: {{ template "fullname" . }}
Additionally, it's worth noting I'm just using the official chart.
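For reference, the install was roughly this (a sketch only; the release name is inferred from the pod name in the logs, and the exact --set keys depend on the chart version):
helm install stable/kube-lego --name legotest \
  --set config.LEGO_EMAIL=you@example.com \
  --set config.LEGO_URL=https://acme-staging.api.letsencrypt.org/directory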
OK, with the help of the Tectonic folks, it looks like I was able to get this working by taking the preexisting user cluster role and binding it to the ServiceAccount I'm creating explicitly for kube-lego.
Here's the yaml:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: {{ template "fullname" . }}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: {{ template "fullname" . }}
# subjects holds references to the objects the role applies to.
subjects:
# May be "User", "Group" or "ServiceAccount".
- kind: ServiceAccount
  name: {{ template "fullname" . }}
  namespace: {{.Release.Namespace}}
# roleRef contains information about the role being used.
# It can only reference a ClusterRole in the global namespace.
roleRef:
  kind: ClusterRole
  # Name of an existing ClusterRole: either "readonly", "user", "admin",
  # or a custom defined role.
  name: user
  apiGroup: rbac.authorization.k8s.io
I guess this issue should then stay open to help us nail down the proper least-privilege permissions kube-lego needs, and possibly to get an RBAC YAML into the repo (both as an example here and in the chart) to help others.
Thanks for reporting this, this is on my list as well. kube-lego needs quite extensive permissions, which differ between environments:
- RW on secrets cluster-wide
- read/watch on ingresses cluster-wide for nginx
- RW on ingresses/services/endpoints cluster-wide for GCE
I think for a really locked-down setup it should run in namespace-only mode; then the needed authz could be a bit slimmer (rough sketch below). But this is going to become important quite soon.
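A rough, untested sketch of what that namespace-only variant could look like, assuming a kube-lego build that can restrict its watch to a single namespace, and that the ingresses, secrets, and kube-lego itself all live in that namespace (my-app is a placeholder):
# Namespaced alternative: same resources as the cluster-wide setup,
# but scoped to one namespace via a Role + RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kube-lego
  namespace: my-app  # placeholder namespace
rules:
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: [""]
  resources: ["secrets", "services"]
  verbs: ["get", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kube-lego
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-lego
subjects:
- kind: ServiceAccount
  name: kube-lego
  namespace: my-app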
@lachlan-b has made some initial work on creating an RBAC policy here: https://github.com/kubernetes/ingress/issues/575#issuecomment-292781303
Building on the work in the previous comment:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-lego
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # modify this to specify your address
  lego.email: "[email protected]"
  # configure letsencrypt's production api
  # lego.url: "https://acme-v01.api.letsencrypt.org/directory"
  # configure letsencrypt's staging api
  lego.url: "https://acme-staging.api.letsencrypt.org/directory"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: lego
rules:
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - secrets
  - services
  - endpoints
  - ingresses
  - nodes
  - pods
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - "extensions"
  - ""
  resources:
  - ingresses
  - ingresses/status
  verbs:
  - get
  - update
  - create
  - list
  - patch
  - delete
  - watch
- apiGroups:
  - "*"
  - ""
  resources:
  - events
  - certificates
  - secrets
  verbs:
  - create
  - list
  - update
  - get
  - patch
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: lego
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lego
subjects:
- kind: ServiceAccount
  name: lego
  namespace: kube-lego
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: lego
  namespace: kube-lego
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      serviceAccountName: lego
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.3
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1
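Assuming the manifest above is saved as kube-lego.yaml, it can be applied in one step:
# create/update everything above (namespace, RBAC, ServiceAccount, deployment)
kubectl apply -f kube-lego.yaml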
I needed to allow create on services, I guess because of: "Please be aware that kube-lego creates it's related service on its own". So, building on the previous post, my rbac.yaml looks like this:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: lego
rules:
- apiGroups:
  - ""
  - "extensions"
  resources:
  - configmaps
  - secrets
  - services
  - endpoints
  - ingresses
  - nodes
  - pods
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
- apiGroups:
  - "extensions"
  - ""
  resources:
  - ingresses
  - ingresses/status
  verbs:
  - get
  - update
  - create
  - list
  - patch
  - delete
  - watch
- apiGroups:
  - "*"
  - ""
  resources:
  - events
  - certificates
  - secrets
  verbs:
  - create
  - list
  - update
  - get
  - patch
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: lego
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lego
subjects:
- kind: ServiceAccount
  name: lego
  namespace: kube-lego
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: lego
  namespace: kube-lego
I narrowed down the permissions from @webwurst's config above, because the original scope looked slightly insane. This only covers the nginx ingress.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-lego
rules:
# must be able to get / create / delete services to manage the kube-lego-nginx service
# TODO: this should actually be a namespaced Role, with a distinct name
# More TODO: why does kube-lego even need to manage this? Why can't it be created
# at instantiation time?
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
  - get
  - delete
# Allow doing *everything* with ingresses. I can't find any use of ingress/status in the
# kube-lego source code.
# TODO: this should be trimmed further, I don't see any use of PATCH and UPDATE so far
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - update
  - create
  - list
  - patch
  - delete
  - watch
# allow global access to manage secrets (to write the keys)
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
  - update
What I removed:
- overly-wide apiGroups (both "" and "extensions")
- configmaps, endpoints, nodes, pods: I couldn't find any use of those in the source code (and why would kube-lego need to watch nodes?)
- access to ingress statuses
- certificates management. Where is this resource even coming from? It's not k8s native
- events management. Couldn't find any proof kube-lego uses the events bus
- trimmed down the verbs for secrets
This ClusterRole allows kube-lego to successfully get a certificate. I'm not sure if it can renew successfully yet, though.
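One way to sanity-check renewal later is to watch the expiry date on the issued certificate; a sketch, where my-tls-secret is a placeholder for whatever your ingress's tls section references:
# decode the issued certificate from the secret and print its expiry date
kubectl get secret my-tls-secret -o 'jsonpath={.data.tls\.crt}' \
  | base64 --decode | openssl x509 -noout -enddate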
This issue needs more attention; more and more k8s clusters are spun up with RBAC.
I was able to trim the services part of the ClusterRole down to a plain Role. The following worked for me:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-lego-serviceaccount
  namespace: kube-lego
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-lego-clusterrole
rules:
# Allow doing *everything* with ingresses. I can't find any use of ingress/status in the
# kube-lego source code.
# TODO: this should be trimmed further, I don't see any use of PATCH and UPDATE so far
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - update
  - create
  - list
  - patch
  - delete
  - watch
# allow global access to manage secrets (to write the keys)
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kube-lego-role
  namespace: kube-lego
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
  - get
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kube-lego-role-binding
  namespace: kube-lego
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-lego-role
subjects:
- kind: ServiceAccount
  name: kube-lego-serviceaccount
  namespace: kube-lego
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-lego-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-lego-clusterrole
subjects:
- kind: ServiceAccount
  name: kube-lego-serviceaccount
  namespace: kube-lego
The services part can't be in a Role and must be in the ClusterRole, or kube-lego won't be able to create the kube-lego-* services in each of the namespaces with ingresses. No authz error is displayed in the logs due to #243.
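A quick way to see the difference (assuming the caller has impersonation rights; my-app-namespace is a placeholder for any namespace with an ingress):
# with only the namespaced Role this answers "no" for any namespace other than kube-lego
kubectl auth can-i create services -n my-app-namespace \
  --as=system:serviceaccount:kube-lego:kube-lego-serviceaccount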
I've just run into this with GCE using one of the options they give for Kubernetes (1.7.5) while testing out my dev stack. Has there been any consensus on which permissions are needed? It would be great to see examples for both nginx and GCE.
This ClusterRole definition worked for me in a GKE cluster using the GCE load balancer. It may even be possible to tighten it up a bit more.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-lego
rules:
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - get
  - create
  - update
  - delete
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - secrets
  verbs:
  - get
  - create
  - update
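For completeness, binding it follows the same pattern as the earlier examples; a minimal sketch, assuming the ServiceAccount is named kube-lego and lives in the kube-lego namespace:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-lego
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-lego
subjects:
- kind: ServiceAccount
  name: kube-lego      # assumed ServiceAccount name
  namespace: kube-lego # assumed namespace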