kubelogin
error: You must be logged in to the server (Unauthorized)
I followed the Keycloak documentation, but can't really seem to make it work. Keycloak is set up as per the docs, and when I run the command below, it looks like I'm getting the response that I should.
kubectl oidc-login get-token -v1 \
--oidc-issuer-url=https://keycloak-domain.org/auth/realms/kubernetes \
--oidc-client-id=kubernetes \
--oidc-client-secret=secret-goes-here
...
I0927 21:37:02.504991 32273 get_token.go:81] the ID token has the claim: groups=[kubernetes:admin]
I0927 21:37:02.504973 32273 get_token.go:81] the ID token has the claim: aud=kubernetes
I0927 21:37:02.505052 32273 get_token.go:81] the ID token has the claim: iss=https://keycloak-domain.org/auth/realms/kubernetes
I0927 21:37:02.505037 32273 get_token.go:81] the ID token has the claim: sub=uuid-goes-here
...
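(For anyone reproducing this: the oidc-login subcommand above comes from the kubelogin kubectl plugin. A minimal install sketch, assuming krew is available:)

# Install kubelogin as a kubectl plugin; krew names it oidc-login
kubectl krew install oidc-login
kubectl oidc-login --help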
kube-apiserver is configured as follows:
$ cat /etc/kubernetes/manifests/kube-apiserver.yaml
...
- --oidc-client-id=kubernetes
- --oidc-groups-claim=groups
- --oidc-issuer-url=https://keycloak-domain.org/auth/realms/kubernetes
- --oidc-username-claim=email
...
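(For reference: the token's iss and aud claims must match --oidc-issuer-url and --oidc-client-id exactly. A quick sanity check on the control-plane node, assuming the kubeadm default manifest path shown above:)

# Print the OIDC flags the API server was actually started with
grep -E 'oidc-(issuer-url|client-id|username-claim|groups-claim)' /etc/kubernetes/manifests/kube-apiserver.yaml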
I applied the following ClusterRoleBinding to the cluster.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: keycloak-admin-group
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: kubernetes:admin
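(The authorization side can be checked independently of the OIDC login by impersonating the group from a context that still works; a sketch, where any-user is just a placeholder name:)

# Should print "yes" if the ClusterRoleBinding above is effective
kubectl auth can-i '*' '*' --as=any-user --as-group=kubernetes:admin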
And I added the following to my kubeconfig file, which I have exported with export KUBECONFIG=./kubeconfig
...
contexts:
- context:
    cluster: green-bird-3416
    user: keycloak
  name: keycloak@green-bird-3416
current-context: keycloak@green-bird-3416
kind: Config
preferences: {}
users:
- name: keycloak
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
      - get-token
      - --oidc-issuer-url=https://keycloak-domain.org/auth/realms/kubernetes
      - --oidc-client-id=kubernetes
      - --oidc-client-secret=secret-goes-here
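(One thing to verify with this combination: the API server above uses --oidc-username-claim=email, so the ID token must actually contain an email claim. Depending on the provider, that can require requesting the scope explicitly, e.g. this extra line in the exec args:)

      - --oidc-extra-scope=email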
It generates a cache file at ~/.kube/cache/oidc-login/d721553ba91f6078f86a5cb2caa2f78eb4d27898b238dfad310b87f01ecdd117 with what looks like correct content.
But when I try to execute kubectl commands, I just get:
$ kubectl get pods
You got a valid token until 2019-09-27 21:50:29 +0200 CEST
error: You must be logged in to the server (Unauthorized)
What am I missing here?
It seems the kube-apiserver does not accept the token. Could you check the log of the kube-apiserver?
# tail the log
kubectl logs -n kube-system --tail=10 -f kube-apiserver-ip-xxxxxxxx
# try API access
kubectl get pods
A message like this should appear:
E1009 09:26:54.912586 1 authentication.go:65] Unable to authenticate the request due to an error: invalid bearer token
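(If kubectl access itself is unavailable, the same log can often be read directly on the control-plane node; a sketch assuming a kubeadm-style static pod, where the exact log path varies by container runtime:)

# Static-pod container logs usually land under /var/log/containers
sudo tail -n 10 /var/log/containers/kube-apiserver-*.log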
@Kerwood what is the kube-apiserver version? I have the same problem and I am using 1.16.1.
Just redeployed (kubeadm) with 1.15.4, same issue...
I got the same issue with 1.14.8 (kops) at first, but I found what was wrong with my settings: if you have
--oidc-username-claim=email
in the kube-apiserver flags, you need to add
--oidc-extra-scope=email
to the kubelogin args.
My final working configuration looks like this:
kubeAPIServer:
  oidcIssuerURL: https://accounts.google.com
  oidcClientID: xxx.apps.googleusercontent.com
  oidcUsernameClaim: email
users:
- name: google
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://accounts.google.com
      - --oidc-client-id=xxx.apps.googleusercontent.com
      - --oidc-client-secret=xxx
      - --oidc-extra-scope=email
      - --oidc-extra-scope=profile
      command: kubectl
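(After adding the extra scopes, a previously cached token will not carry the new claims; clearing kubelogin's cache forces a fresh login, e.g.:)

# Remove the cached token, then request a new one with the email scope
rm -rf ~/.kube/cache/oidc-login
kubectl oidc-login get-token -v1 \
  --oidc-issuer-url=https://accounts.google.com \
  --oidc-client-id=xxx.apps.googleusercontent.com \
  --oidc-client-secret=xxx \
  --oidc-extra-scope=email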
@c4po I'm using kops 1.14.6 and Google as the IdP, but this config doesn't work for me. What are your Google settings exactly? Did you only create an OAuth client ID, or anything else? Also, did you rolling-update the cluster after adding the kubeAPIServer config?
@hbceylan just a Google OAuth client ID, and a rolling-update of the cluster after the config change.
I am also facing the same issue; here are my commands:
minikube start \
--memory=3000 \
--network-plugin=cni \
--extra-config=apiserver.oidc-issuer-url=https://accounts.google.com \
--extra-config=apiserver.oidc-username-claim=email \
--extra-config=apiserver.oidc-client-id=****.apps.googleusercontent.com
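(A quick way to confirm those extra-config flags actually reached the API server, sketched below:)

# The oidc flags should appear in the running kube-apiserver pod spec
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep oidc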
------ kubeconfig
- name: ****@gmail.com
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://accounts.google.com
      - --oidc-client-id=****.apps.googleusercontent.com
      - --oidc-client-secret=****
      - --oidc-extra-scope=email
      command: kubectl
      env: null
------- user context created
- context:
    cluster: minikube
    user: ****@gmail.com
  name: kubernetes-local-oidc
-------- user role-binding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: User
  name: ****@gmail.com
--------- Versions
minikube - 1.5.2
kubelogin - 1.15.0
kubectl - 1.16.0
kubernetes - 1.16.0
When I try to list the pods as this user, I get the error below:
E1210 05:33:11.849924 1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, oidc: parse username claims "email": claim not present]
I tried restarting minikube as well, but no luck. However, when I remove 'email' from the commands, it works and logs me in as User "https://accounts.google.com#sub".
It seems I've hit some bug! Or maybe it's a fault in my kubelogin configuration.
******* Update ******* I tried running the command below and got a JWT token, which I decoded on jwt.io. Surprisingly, there are no email or profile details in the response.
kubectl oidc-login get-token \
--oidc-issuer-url=https://accounts.google.com \
--oidc-client-id=****.apps.googleusercontent.com \
--oidc-client-secret=**** \
--oidc-extra-scope=email \
--oidc-extra-scope=profile
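(A local alternative to jwt.io for inspecting the claims; a minimal shell sketch, where TOKEN is a placeholder for the raw JWT printed by the command above, assuming GNU base64 and jq:)

# Extract the payload (second dot-separated part), convert base64url
# to base64, restore padding, then decode and pick out key claims
TOKEN="eyJ..."   # placeholder: paste the token here
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d | jq '{iss, aud, email, groups}'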
@TheRum I've prepared a blog post on this that might be helpful:
https://medium.com/@hbceylan/deep-dive-kubernetes-single-sign-on-sso-with-openid-connection-via-g-suite-a4f01bd4a48f
Thanks @hbceylan for the article. But it doesn't help solve the issue I'm facing.
You can dump the claims of the token by passing the -v1 option to kubelogin.
#200
You can dump the claims of the token by passing the -v1 option to kubelogin.
➜ kubectl --user=oidc get nodes -v1
I0420 22:25:53.002248 67152 shortcut.go:89] Error loading discovery information: Unauthorized
error: You must be logged in to the server (Unauthorized)
kubeAPIServer:
  oidcIssuerURL: https://accounts.google.com
  oidcClientID: xxx.apps.googleusercontent.com
  oidcUsernameClaim: email
This fixed our issue.
Hi everyone! I had the same original issue; I'm using authentication with Keycloak as the IdP. Authentication through the browser is working, but I receive the message below (log level 1) from the kubectl --user=oidc get nodes command.
I0923 17:16:30.416277 35800 get_token.go:107] you already have a valid token until 2021-09-23 17:21:28 +0200 CEST
I0923 17:16:30.416287 35800 get_token.go:114] writing the token to client-go
error: You must be logged in to the server (Unauthorized)
From the Kubernetes API pod, the error is the same as explained by @int128:
1 authentication.go:53] Unable to authenticate the request due to an error: invalid bearer token
The kubectl oidc-login setup command completes and returns the token.
My environment:
Kubernetes version 1.19.6, deployed by Kubespray
------- API CONFIG FILE
- --oidc-issuer-url=https://keycloak.localdomain.lan/auth/realms/Kubernetes
- --oidc-client-id=kubernetes
- --oidc-ca-file=/etc/kubernetes/ssl/localdomain.lan.pem
------- KUBECONFIG FILE
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://keycloak.localdomain.lan/auth/realms/Kubernetes
      - --oidc-client-id=kubernetes
      - --oidc-client-secret=SECRETID
      - --insecure-skip-tls-verify
      - -v1
      command: kubectl
      env: null
      provideClusterInfo: false
Has someone already found and solved this issue? Thanks in advance for any help!
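(One thing worth ruling out with a private-CA setup like the one above: the API server validates the issuer over TLS using --oidc-ca-file, so that CA must match the certificate Keycloak actually serves. A hedged check, assuming the issuer listens on 443:)

# Exit status is non-zero if the CA file does not validate Keycloak's cert
openssl s_client -connect keycloak.localdomain.lan:443 \
  -CAfile /etc/kubernetes/ssl/localdomain.lan.pem \
  -verify_return_error </dev/null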
I encountered this issue before. You may need to clear the cache, especially if you have tried many different configurations.
What I did was:
rm -rf ~/.kube/cache
rm -rf ~/.kube/http-cache
Then it initiates a new login the next time you use that user, and it eventually worked perfectly fine for me.
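(kubelogin keeps its token cache under ~/.kube/cache/oidc-login, as seen earlier in the thread, so removing just that directory is enough to force a fresh login:)

rm -rf ~/.kube/cache/oidc-login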
FYI, this is the element configured in my ~/.kube/config:
- name: oidc-cluster-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=<REDACTED>
      - --oidc-redirect-url-hostname=<REDACTED>
      - --oidc-client-id=<REDACTED>
      - --oidc-client-secret=<REDACTED>
      - --oidc-extra-scope=email
      - --certificate-authority=/tmp/ca.pem
      command: kubectl
      env: null
      provideClusterInfo: false
It seems the kube-apiserver does not accept the token. Could you check the log of the kube-apiserver?
# tail the log
kubectl logs -n kube-system --tail=10 -f kube-apiserver-ip-xxxxxxxx
# try API access
kubectl get pods
A message like this should appear:
E1009 09:26:54.912586 1 authentication.go:65] Unable to authenticate the request due to an error: invalid bearer token
Hi @int128, I am getting the exact same error message in the kube-apiserver log. It looks like the kube-apiserver is unable to accept the token, even though I can generate the token with:
kubectl oidc-login get-token -v1 ^
  --oidc-issuer-url=https://XXXXXXXXXXXX/auth/realms/master ^
  --oidc-client-id=kubernetes ^
  --oidc-client-secret=some secret
Any idea what the issue is here, and what I can try next?
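(A generic next step, sketched here with the same placeholder values as above: compare the iss claim inside the token with the issuer the API server trusts; they must match byte for byte, including scheme, path, and any trailing slash, or the API server reports invalid bearer token.)

# Dump the iss claim kubelogin received (claim lines appear with -v1)
kubectl oidc-login get-token -v1 \
  --oidc-issuer-url=https://XXXXXXXXXXXX/auth/realms/master \
  --oidc-client-id=kubernetes \
  --oidc-client-secret=some-secret 2>&1 | grep 'claim: iss='
# ...and compare with the issuer configured on the API server
grep -- '--oidc-issuer-url' /etc/kubernetes/manifests/kube-apiserver.yaml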