Dashboard on GKE | Logged in with auth header but unauthorized
What happened?
- I am using oauth2-proxy to authenticate the Kubernetes Dashboard hosted on a GKE cluster (which has IAM configured as the identity provider for cluster authentication).
- I have deployed Kubernetes Dashboard using Helm:
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
- Below are my Ingress and oauth2-proxy configs:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $token $upstream_http_authorization;
      add_header Authorization $token;
      proxy_set_header Authorization $token;
      proxy_pass_header Authorization;
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.kube-system.svc.cluster.local:4180/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$http_host$request_uri"
  name: external-auth-oauth2
  namespace: kubernetes-dashboard
spec:
  ingressClassName: nginx
  rules:
  - host: www.k8sdashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - www.k8sdashboard.com
    secretName: k8sdashboard-tls
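(For comparison, ingress-nginx also has a dedicated annotation for copying headers from the external-auth response onto the request sent to the upstream, which can stand in for the configuration-snippet above. A minimal sketch, not part of the original manifest, assuming the same oauth2-proxy auth endpoint:)

metadata:
  annotations:
    # Sketch only, not from the original report.
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.kube-system.svc.cluster.local:4180/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$http_host$request_uri"
    # Copy these headers from the oauth2-proxy auth response to the Dashboard upstream.
    nginx.ingress.kubernetes.io/auth-response-headers: "Authorization, X-Auth-Request-Email"

The oauth2-proxy Deployment: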
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=google
        - --http-address=0.0.0.0:4180
        - --upstream=https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
        - --client-id=xxxxxxxx
        - --client-secret=xxxxxxxx
        - --cookie-domain=k8sdashboard.com
        - --email-domain=xxx.ca
        - --whitelist-domain=xxx.ca
        - --cookie-refresh=1h
        - --cookie-secret=xxxxxxxx
        - --redirect-url=https://www.k8sdashboard.com/oauth2/callback
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --pass-access-token=true
        - --pass-authorization-header=true
        - --pass-user-headers=true
        - --pass-host-header=true
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
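(The auth-url annotation above points at oauth2-proxy.kube-system.svc.cluster.local:4180, so a Service along these lines is assumed to exist; it is not shown in the original report. A minimal sketch:)

apiVersion: v1
kind: Service
metadata:
  # Assumed Service (not in the original report) matching the auth-url annotation.
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy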
- Deployed an nginx pod in the nginx namespace and created a RoleBinding for my GKE IAM user:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-japneet-rolebinding
  namespace: nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: User
  name: [email protected]
- When I try to access https://www.k8sdashboard.com, I am redirected to OAuth authentication with Google and then correctly redirected back to the Kubernetes Dashboard.
Problem: Although I land on the dashboard, I still get an "Unauthorized" error and don't see any pods in the nginx namespace, even though kubectl auth can-i confirms that my GKE IAM user is allowed to get pods:
kubectl auth can-i get pods \
> --namespace=nginx \
> [email protected]
yes
What did you expect to happen?
I should be able to see the nginx pods in the nginx namespace once logged in.
How can we reproduce it (as minimally and precisely as possible)?
Follow the steps described in the "What happened?" section.
Anything else we need to know?
No response
What browsers are you seeing the problem on?
Chrome
Kubernetes Dashboard version
2.7.0
Kubernetes version
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:33:49Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3-gke.100", GitCommit:"6466b51b762a5c49ae3fb6c2c7233ffe1c96e48c", GitTreeState:"clean", BuildDate:"2023-06-23T09:27:28Z", GoVersion:"go1.20.5 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}
Dev environment
No response
@floreks: Any insights here?
@japneet-sahni did you find any resolution for this?
Same error here, hard reset does not help
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Is your Kubernetes API server configured to accept the standard JWT tokens returned by this OAuth flow? kubectl for GKE by default uses Google's custom auth plugin to give you access to the GKE cluster. You would need to try something like kubectl --token <BEARER_TOKEN> ... to make sure that the Kubernetes API accepts this token. Dashboard only proxies the token from the Authorization: Bearer <token> header to the Kubernetes API server and acts on behalf of that user. We do not do any authorization on our own.
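(A minimal way to run that check, assuming you can capture the bearer token that oauth2-proxy injects; the token value and API endpoint below are placeholders:)

# Placeholder: the token oauth2-proxy forwards in the Authorization header
TOKEN="<BEARER_TOKEN_FROM_OAUTH2_PROXY>"

# If the API server rejects this token, Dashboard will report "Unauthorized"
# regardless of what the RoleBinding allows.
kubectl --token "$TOKEN" get pods --namespace nginx

# Same check without kubectl's GKE auth plugin, straight against the API endpoint
# (placeholder endpoint, e.g. taken from `gcloud container clusters describe <cluster>`).
curl -sk -H "Authorization: Bearer $TOKEN" "https://<API_SERVER_ENDPOINT>/api/v1/namespaces/nginx/pods"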
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.