
Dashboard on GKE | Logged in with auth header but unauthorized

japneet-sahni opened this issue 2 years ago · 5 comments

What happened?

  • I am using oauth2-proxy to authenticate the Kubernetes Dashboard hosted on a GKE cluster (which has IAM configured as the identity provider for cluster authentication)

  • I have deployed Kubernetes Dashboard using Helm: helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

  • Below are my Ingress and oauth2-proxy configs:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
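      # Capture the Authorization header returned by the oauth2-proxy auth
      # subrequest (see --set-authorization-header=true in the Deployment
      # below) and forward it to the Dashboard upstream on every request.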
      auth_request_set $token $upstream_http_authorization;
      add_header Authorization $token;
      proxy_set_header Authorization $token;
      proxy_pass_header Authorization;
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.kube-system.svc.cluster.local:4180/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$http_host$request_uri"
  name: external-auth-oauth2
  namespace: kubernetes-dashboard
spec:
  ingressClassName: nginx
  rules:
  - host: www.k8sdashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - www.k8sdashboard.com
    secretName: k8sdashboard-tls

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=google
        - --http-address=0.0.0.0:4180
        - --upstream=https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
        - --client-id=xxxxxxxx
        - --client-secret=xxxxxxxx
        - --cookie-domain=k8sdashboard.com
        - --email-domain=xxx.ca
        - --whitelist-domain=xxx.ca
        - --cookie-refresh=1h
        - --cookie-secret=xxxxxxxx
        - --redirect-url=https://www.k8sdashboard.com/oauth2/callback
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --pass-access-token=true
        - --pass-authorization-header=true
        - --pass-user-headers=true
        - --pass-host-header=true
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
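As a sanity check for the two manifests above (not part of the original report), one way to confirm that the auth subrequest actually returns an Authorization header for nginx to capture is to call the oauth2-proxy auth endpoint directly with the session cookie taken from the browser. The cookie value here is a placeholder:

# Hypothetical in-cluster check against the same URL the ingress uses for
# auth-url; a valid session should yield HTTP 202 plus an
# "Authorization: Bearer <token>" response header.
kubectl run curl-debug --rm -i --restart=Never --image=curlimages/curl --command -- \
  curl -s -D - -o /dev/null \
  -H "Cookie: _oauth2_proxy=<session-cookie-from-browser>" \
  http://oauth2-proxy.kube-system.svc.cluster.local:4180/oauth2/auth
# HTTP 401 instead means oauth2-proxy has no valid session and there is
# nothing for the configuration-snippet to forward.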
  • Deployed an nginx pod in the nginx namespace and created a RoleBinding for my GKE IAM user:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-japneet-rolebinding
  namespace: nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: [email protected]
  • When I try to access https://www.k8sdashboard.com, I am redirected to OAuth authentication with Google and then correctly redirected back to the Kubernetes Dashboard. (Screenshot of the auth request omitted.)

Problem: Although I land on the dashboard, I still get an "Unauthorized" error and don't see any pods in the nginx namespace, even though kubectl auth can-i confirms that my GKE IAM user is allowed to get pods:

kubectl auth can-i get pods \
>     --namespace=nginx \
>     [email protected]
yes
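Note that kubectl auth can-i ... --as relies on impersonation, so it only verifies the RBAC binding; it says nothing about whether the bearer token that oauth2-proxy forwards authenticates at all. A minimal way to separate the two failure modes, assuming a recent kubectl (v1.27 or newer, which provides auth whoami) and a placeholder token:

# If this returns "Unauthorized", authentication (not RBAC) is what is failing.
kubectl --token="<BEARER_TOKEN>" auth whoami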

What did you expect to happen?

I should be able to see nginx pods in nginx namespace once logged in.

How can we reproduce it (as minimally and precisely as possible)?

Follow the steps as described in "What happened" section.

Anything else we need to know?

No response

What browsers are you seeing the problem on?

Chrome

Kubernetes Dashboard version

2.7.0

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:33:49Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3-gke.100", GitCommit:"6466b51b762a5c49ae3fb6c2c7233ffe1c96e48c", GitTreeState:"clean", BuildDate:"2023-06-23T09:27:28Z", GoVersion:"go1.20.5 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}

Dev environment

No response

japneet-sahni avatar Sep 17 '23 23:09 japneet-sahni

@floreks: Any insights here?

japneet-sahni avatar Sep 18 '23 19:09 japneet-sahni

@japneet-sahni did you find any resolution for this?

mecampbellsoup avatar Nov 19 '23 16:11 mecampbellsoup

Same error here, hard reset does not help

MyMindWorld avatar Nov 21 '23 20:11 MyMindWorld

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 19 '24 21:02 k8s-triage-robot

Is your Kubernetes API server configured to accept the standard JWT tokens returned by this OAuth flow? kubectl for GKE by default uses Google's custom auth plugin to give you access to the GKE cluster. You would need to try something like kubectl --token <BEARER_TOKEN> ... to make sure that the Kubernetes API server accepts this token. The Dashboard only proxies the token from the Authorization: Bearer <token> header to the Kubernetes API server and acts on behalf of it. We do not do any authorization on our own.

floreks avatar Mar 04 '24 14:03 floreks
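To make the suggestion above concrete, a minimal sketch (not from the thread; the bearer token is a placeholder for the one oauth2-proxy forwards, copied e.g. from the browser's request headers or oauth2-proxy debug logs):

# Decode the JWT payload (base64url, unpadded) to check the "iss" and "aud"
# claims of the token the OAuth flow produced.
TOKEN="<BEARER_TOKEN>"
python3 -c 'import base64, json, sys
p = sys.argv[1].split(".")[1]
p += "=" * (-len(p) % 4)
print(json.dumps(json.loads(base64.urlsafe_b64decode(p)), indent=2))' "$TOKEN"

# Send the same token straight to the Kubernetes API server, bypassing the
# Dashboard; "Unauthorized" here means the token itself is not accepted.
kubectl --token="$TOKEN" get pods -n nginx

For comparison, kubectl on GKE normally authenticates through the gke-gcloud-auth-plugin rather than a static bearer token, so working kubectl access does not by itself prove that the OAuth flow's token is accepted.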

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 03 '24 15:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar May 03 '24 15:05 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 03 '24 15:05 k8s-ci-robot