OIDC with Keycloak which is also in the cluster?
Hello!
Discussions were disabled, so apologies for posting this as an issue. :)
Most of my company colleagues aren't as terminal-savvy as me, and a former co-worker had deployed a k3s cluster here. Now I have added Headlamp as a nice web UI to give my colleagues an entry point into the cluster so they can see what it is doing and the like.
However, I had wanted to use our existing Keycloak OIDC structure, bound to our AD, to enable seamless SSO. And I can, in fact, click the login button and it "logs me in" - but the browser console tells me that I am unauthenticated.
Granted, I know that it is attempting to authenticate me directly with the API server through OIDC.
The question is: how can I make that work without having to pass a single service account token around? I shared one with another colleague for now so they can try Headlamp out besides me, but I would like to integrate it into our existing infrastructure.
Since the container has the service account loaded and a ClusterRoleBinding is established, Headlamp can authenticate with this just fine, in theory.
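For the OIDC path itself, my current understanding is that the k3s API server also has to be told to trust the Keycloak issuer. I have not applied anything like this yet, but I assume it would roughly look like the following in /etc/rancher/k3s/config.yaml (the claim names are guesses on my part):
## Sketch only, not applied yet - OIDC flags for the k3s-embedded API server
kube-apiserver-arg:
  - "oidc-issuer-url=https://keycloak.our.domain/realms/master"
  - "oidc-client-id=headlamp"
  - "oidc-username-claim=email"
  - "oidc-groups-claim=groups"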
Is there anything I missed or that I have to do to make it work?
Here is the current deployment, in full:
## Copied and modified from source
## ref https://github.com/headlamp-k8s/headlamp/blob/main/kubernetes-headlamp.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: headlamp
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: headlamp
  namespace: headlamp
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: headlamp
  template:
    metadata:
      labels:
        k8s-app: headlamp
    spec:
      serviceAccountName: headlamp-sva
      containers:
        - name: headlamp
          image: ghcr.io/headlamp-k8s/headlamp:latest
          args:
            - "-in-cluster"
            - "-plugins-dir=/headlamp/plugins"
          ports:
            - name: http
              containerPort: 4466
              protocol: TCP
          env:
            - name: HEADLAMP_CONFIG_OIDC_CLIENT_ID
              value: "headlamp"
            - name: HEADLAMP_CONFIG_OIDC_CLIENT_SECRET
              value: "..."
            - name: HEADLAMP_CONFIG_OIDC_IDP_ISSUER_URL
              value: "https://keycloak.our.domain/realms/master"
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 4466
            initialDelaySeconds: 30
            timeoutSeconds: 30
      nodeSelector:
        'kubernetes.io/os': linux
---
kind: Secret
apiVersion: v1
metadata:
  name: headlamp-admin
  namespace: headlamp
  annotations:
    kubernetes.io/service-account.name: "headlamp-sva"
type: kubernetes.io/service-account-token
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: headlamp-sva
  namespace: headlamp
  labels:
    k8s-app: headlamp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: headlamp-admin
  namespace: headlamp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: headlamp-sva
    namespace: headlamp
---
kind: Service
apiVersion: v1
metadata:
  name: headlamp
  namespace: headlamp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 4466
  selector:
    k8s-app: headlamp
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: headlamp
  namespace: headlamp
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`headlamp.senpro.it`)
      services:
        - name: headlamp
          port: 80
Thanks and kind regards!
@senpro-ingwersenk Did you follow these docs to set up Headlamp with Keycloak OIDC? Could you redact any sensitive information and share a screenshot/logs of the error that you see in the browser console?
The guide assumes that Keycloak is hosted outside the cluster - which mine is not.
# kubectl get -n keycloak all
NAME                           READY   STATUS    RESTARTS   AGE
pod/keycloak-9dd979546-rpzp8   3/3     Running   0          2d7h

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/keycloak   ClusterIP   10.43.212.197   <none>        8080/TCP   271d

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/keycloak   1/1     1            1           271d

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/keycloak-584bf4c8bf   0         0         0       237d
replicaset.apps/keycloak-5965995bf    0         0         0       84d
replicaset.apps/keycloak-5f65ffd5fb   0         0         0       84d
replicaset.apps/keycloak-6f89f874f    0         0         0       271d
replicaset.apps/keycloak-8694c4785    0         0         0       84d
replicaset.apps/keycloak-9dd979546    1         1         1       2d7h
So I did my best to put together a configuration and deployment that comes close to this - but logging in to Headlamp via OIDC still produces the "unauthenticated" error in my browser console.
When I use the kubectl create token command and use the result to log in, it works just fine. But I cannot find out the longevity of that token, which is why I would rather reuse my existing Keycloak - which, again, sits inside the cluster, not outside as the guide assumes.
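For what it's worth, kubectl create token does seem to accept an explicit --duration, so the lifetime can at least be pinned down - a sketch using the service account from the deployment above:
# Sketch: mint a time-limited token for the existing service account
# (the API server may cap or round the requested duration)
kubectl create token headlamp-sva -n headlamp --duration=24h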
I did try to find similar options for k3s in particular, but couldn't - though it is likely that I missed something.
@senpro-ingwersenk Hi! Thanks for your efforts to get it running. I am not sure the distinction between Keycloak outside the cluster and inside it actually matters - in both cases you should publish Keycloak externally with an Ingress (i.e. Traefik in your case?) and use a domain name pointing to that ingress. Also, please check that the services published through the ingress are reachable from inside the cluster itself - that can be an issue, particularly when running in clouds like DO.
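Something along these lines should do it, mirroring the Headlamp IngressRoute from your manifest (host name and entrypoint are assumptions about your setup, adjust as needed):
## Sketch: expose the in-cluster Keycloak through the same Traefik entrypoint,
## so that both the browser and the cluster can resolve the issuer URL
## https://keycloak.our.domain/realms/master to this route.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: keycloak
  namespace: keycloak
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`keycloak.our.domain`)
      services:
        - name: keycloak
          port: 8080
Once the issuer URL is reachable under that name from both outside and inside the cluster, the remaining pieces are the oidc-* arguments on the k3s API server (as sketched earlier in the thread) and RBAC bindings for the OIDC users or groups, since the cluster-admin binding above only covers the service account.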