Kubernetes Dashboard does not accept Authorization: Bearer token even when passed via Ingress Reverse Proxy
What happened?
I am using Kubernetes Dashboard v7.13.0 behind an OAuth2 Proxy (via NGINX Ingress and a Kong gateway). The OAuth2 Proxy correctly authenticates the user and responds with an Authorization: Bearer <token> header.
Despite successful authentication and the presence of the bearer token in the ingress response (verified via /oauth2/auth returning 202 and including the Authorization header), the dashboard still prompts for manual token input on the login screen.
However, we suspect that Kong is either stripping the Authorization header or never receiving it, as the token is not visible in the Kong logs or in the requests reaching the dashboard. Even with nginx.ingress.kubernetes.io/auth-response-headers and other annotations configured, the dashboard does not detect the token.
dashboard-ingress.yaml annotations:
annotations:
  #konghq.com/plugins: "preserve-auth-header"
  nginx.ingress.kubernetes.io/auth-url: "https://<host>/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://<host>/oauth2/start?rd=https://$host$request_uri"
  nginx.ingress.kubernetes.io/auth-response-headers: >-
    X-Auth-Request-Email,X-Auth-Request-Preferred-Username,X-Auth-Request-Access-Token,
    X-Auth-Request-Roles,X-Auth-Request-User,X-Auth-Request-Groups,X-Forwarded-Groups,
    Authorization
  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  nginx.ingress.kubernetes.io/proxy-buffer-size: "256k"
  nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  nginx.ingress.kubernetes.io/rewrite-target: "/"
  nginx.ingress.kubernetes.io/proxy-buffers: "4 512k"
  nginx.ingress.kubernetes.io/proxy-busy-buffers-size: "512k"
oauth2-proxy configs:
--set config.clientID="xxxxxx" \
--set config.clientSecret="xxxxxxx" \
--set config.cookieSecret="xxxxxxxxx" \
--set image.repository="docker.io/oauth2-proxy/oauth2-proxy" \
--set image.tag="v7.9.0" \
--set extraArgs.provider="oidc" \
--set extraArgs.azure-tenant="xxxxxxxxxx" \
--set extraArgs.oidc-issuer-url="https://login.microsoftonline.com/xxxxxxxxxx/v2.0" \
--set extraArgs.redirect-url="https://<host>/oauth2/callback" \
--set extraArgs.scope="openid profile email offline_access" \
--set extraArgs.email-domain="*" \
--set extraArgs.upstream="https://kubernetes-dashboard-kong-proxy.kubernetes-dashboard.svc.cluster.local:443" \
--set extraArgs.whitelist-domain="xxxxxxxx" \
--set extraArgs.cookie-domain=".cloud.xxx.net" \
--set extraArgs.cookie-csrf-expire="59m" \
--set extraArgs.cookie-csrf-per-request=true \
--set extraArgs.cookie-expire="1h" \
--set extraArgs.cookie-refresh="59m" \
--set extraArgs.cookie-secure=true \
--set extraArgs.cookie-name="_oauth2_proxy_csrf" \
--set extraArgs.reverse-proxy=true \
--set extraArgs.set-authorization-header=true \
--set extraArgs.pass-authorization-header=true \
--set extraArgs.pass-access-token=true \
--set extraArgs.show-debug-on-error=true \
--set extraArgs.ssl-upstream-insecure-skip-verify=true \
--set extraArgs.ssl-insecure-skip-verify=true \
--set extraArgs.skip-auth-strip-headers=false \
--set extraArgs.pass-user-headers=true \
--set extraArgs.pass-host-header=true \
--set extraArgs.set-xauthrequest=true \
--set extraArgs.skip-provider-button=true \
--set extraArgs.skip-jwt-bearer-tokens=true \
--set ingress.enabled=true \
--set ingress.className="nginx" \
--set ingress.path="/oauth2" \
--set ingress.hosts[0]="<host>" \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-buffer-size"="256k" \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-body-size"="5000m"
What did you expect to happen?
- If a valid Authorization: Bearer <token> header is present on the request reaching the dashboard, it should automatically log the user in
- The dashboard should recognize the token and not prompt for manual token entry
- The behavior should be consistent with OAuth2 proxy + reverse proxy flows as suggested in the documentation and community discussions
How can we reproduce it (as minimally and precisely as possible)?
- Deploy Kubernetes Dashboard v7.13.0 using the official Helm chart
- Use OAuth2 Proxy with --reverse-proxy, --set-xauthrequest, --pass-authorization-header, and --pass-access-token
- Use NGINX Ingress with annotations as shown above
- Route the ingress backend to the Kubernetes Dashboard via the Kong proxy service
- Login via the OAuth2 proxy works and returns 202 with an Authorization: Bearer token
- The dashboard still shows the manual token login instead of using the bearer token
- No Authorization header is seen in the request headers or logs
Anything else we need to know?
No response
What browsers are you seeing the problem on?
Chrome
Kubernetes Dashboard version
v7.13.0
Kubernetes version
Client Version: v1.30.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.4
Dev environment
No response
EDIT: After enabling verbose logging on the dashboard components, I can see that the bearer token is reaching the dashboard, but the dashboard rejects it, resulting in 401 Unauthorized. OIDC is enabled on our AKS cluster and the API server is reachable. What are we missing here? Any input is appreciated!
--set extraArgs.scope="openid 6dae42f8-4368-4678-94ff-3960e28e3630/.default" \
This will send the X-Auth-Request-Access-Token header to the dashboard with the correct JWT. However, I still haven't figured out how to rename the header from X-Auth-Request-Access-Token to Authorization. You cannot take the Authorization header straight from the oauth2-proxy, as it has the wrong scope. Perhaps use the auth-snippet annotation on the NGINX ingress?
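A minimal sketch of that idea (snippet annotations must be allowed by the ingress-nginx controller, and $http_x_auth_request_access_token assumes the access token is surfaced via set-xauthrequest):
# Sketch: rewrite Authorization from the access token returned by oauth2-proxy.
nginx.ingress.kubernetes.io/auth-snippet: |
  proxy_set_header Authorization "Bearer $http_x_auth_request_access_token";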
@oskarm93 - Thanks for responding. We have this in our updated config and also added it to the App Registration API permissions as shown below, but we are still running into 401 Unauthorized errors. We enabled verbose logging on all the dashboard components and noticed that the bearer token is reaching the dashboard, but the Kubernetes API server is rejecting the token.
NOTE: When decoding the bearer token, we noticed that the aud in the token is the App Registration client ID instead of the AKS AAD Server appId. Is that how it works, or should aud match the AKS AAD Server appId?
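A quick way to check is to decode the token payload locally; a minimal sketch, assuming TOKEN holds the bearer token and jq is installed. For the AKS-managed AAD flow, the API server generally expects aud to be the AKS AAD Server application ID, i.e. the well-known GUID that the /.default scope above targets:
# Decode the JWT payload (the second dot-separated segment) to inspect aud/iss.
# base64url -> base64, padded to a multiple of 4; TOKEN is a placeholder.
seg=$(printf '%s' "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
printf '%s' "$seg" | base64 -d | jq '{aud, iss, scp}'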
Updated Oauth2-proxy config:
--set config.clientID="xxxxxx" \
--set config.clientSecret="xxxxxxx" \
--set config.cookieSecret="xxxxxxxxx" \
--set image.repository="docker.io/oauth2-proxy/oauth2-proxy" \
--set image.tag="v7.9.0" \
--set extraArgs.provider="oidc" \
--set extraArgs.azure-tenant="azureTenantId" \
--set extraArgs.oidc-issuer-url="https://login.microsoftonline.com/azureTenantId/v2.0" \
--set extraArgs.oidc-jwks-url="https://login.microsoftonline.com/azureTenantId/discovery/v2.0/keys" \
--set extraArgs.oidc-email-claim="email" \
--set extraArgs.oidc-groups-claim="groups" \
--set extraArgs.login-url="https://login.microsoftonline.com/azureTenantId/oauth2/v2.0/authorize" \
--set extraArgs.redeem-url="https://login.microsoftonline.com/azureTenantId/oauth2/v2.0/token" \
--set extraArgs.redirect-url="https://<host>/oauth2/callback" \
--set extraArgs.skip-oidc-discovery=true \
--set extraArgs.scope="openid profile email offline_access 6dae42f8-4368-4678-94ff-3960e28e3630/.default" \
--set extraArgs.email-domain="*" \
--set extraArgs.upstream="https://kubernetes-dashboard-kong-proxy.kubernetes-dashboard.svc.cluster.local:443" \
--set extraArgs.whitelist-domain="xxxxxxxx" \
--set extraArgs.cookie-domain=".cloud.xxx.net" \
--set extraArgs.cookie-csrf-expire="59m" \
--set extraArgs.cookie-csrf-per-request=true \
--set extraArgs.cookie-expire="1h" \
--set extraArgs.cookie-refresh="59m" \
--set extraArgs.cookie-secure=true \
--set extraArgs.cookie-name="_oauth2_proxy_csrf" \
--set extraArgs.reverse-proxy=true \
--set extraArgs.set-authorization-header=true \
--set extraArgs.pass-authorization-header=true \
--set extraArgs.pass-access-token=true \
--set extraArgs.show-debug-on-error=true \
--set extraArgs.ssl-upstream-insecure-skip-verify=true \
--set extraArgs.ssl-insecure-skip-verify=true \
--set extraArgs.skip-auth-strip-headers=false \
--set extraArgs.pass-user-headers=true \
--set extraArgs.pass-host-header=true \
--set extraArgs.set-xauthrequest=true \
--set extraArgs.skip-provider-button=true \
--set extraArgs.skip-jwt-bearer-tokens=true \
--set ingress.enabled=true \
--set ingress.className="nginx" \
--set ingress.path="/oauth2" \
--set ingress.hosts[0]="<host>" \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-buffer-size"="256k" \
--set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-body-size"="5000m"
Kubernetes Dashboard Ingress:
annotations:
  #konghq.com/plugins: "preserve-auth-header"
  nginx.ingress.kubernetes.io/auth-url: "https://<host>/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://<host>/oauth2/start?rd=https://$host$request_uri"
  nginx.ingress.kubernetes.io/auth-response-headers: >-
    X-Auth-Request-Email,X-Auth-Request-Preferred-Username,X-Auth-Request-Access-Token,
    X-Auth-Request-Roles,X-Auth-Request-User,X-Auth-Request-Groups,X-Forwarded-Groups,
    Authorization
  # requires snippet annotations to be enabled on the controller; see the note after this block
  nginx.ingress.kubernetes.io/auth-snippet: |
    proxy_set_header Authorization "Bearer $http_x_auth_request_access_token";
  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  nginx.ingress.kubernetes.io/proxy-buffer-size: "256k"
  nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  nginx.ingress.kubernetes.io/rewrite-target: "/"
  nginx.ingress.kubernetes.io/proxy-buffers: "4 512k"
  nginx.ingress.kubernetes.io/proxy-busy-buffers-size: "512k"
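One thing worth verifying for the auth-snippet annotation above: recent ingress-nginx releases ignore snippet annotations unless the controller explicitly allows them. A sketch of the controller ConfigMap setting (the ConfigMap name and namespace depend on your installation):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your install
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  # newer releases may additionally gate snippets behind a risk level:
  # annotations-risk-level: Critical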
Dashboard Auth Error:
I0702 13:16:22.605419 1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.3.0"
I0702 13:16:22.605502 1 init.go:48] Using in-cluster config
I0702 13:16:22.605665 1 main.go:43] "Listening and serving insecurely on" address="0.0.0.0:8000"
I0702 13:17:10.757040 1 auth.go:38] "Bearer token" size=2026
I0702 13:17:10.757539 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0702 13:17:10.757556 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0702 13:17:10.757561 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0702 13:17:10.757566 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0702 13:17:10.758451 1 discovery_client.go:658] "Request Body" body=""
I0702 13:17:10.758554 1 round_trippers.go:473] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: dashboard-auth/v0.0.0 (linux/amd64) kubernetes/$Format" -H "Authorization: Bearer <masked>" 'https://xxx:xxx:xx:xx/version'
I0702 13:17:10.782797 1 round_trippers.go:517] HTTP Trace: Dial to tcp:xxx:xxx:xx:xx:443 succeed
I0702 13:17:10.797939 1 round_trippers.go:560] GET https://xxx:xxx:xx:xx:443/version 401 Unauthorized in 39 milliseconds
I0702 13:17:10.797983 1 round_trippers.go:577] HTTP Statistics: DNSLookup 0 ms Dial 24 ms TLSHandshake 8 ms ServerProcessing 6 ms Duration 39 ms
I0702 13:17:10.797991 1 round_trippers.go:584] Response Headers:
I0702 13:17:10.797998 1 round_trippers.go:587] Date: Wed, 02 Jul 2025 13:17:10 GMT
I0702 13:17:10.798003 1 round_trippers.go:587] Audit-Id: audit-id
I0702 13:17:10.798007 1 round_trippers.go:587] Cache-Control: no-cache, private
I0702 13:17:10.798011 1 round_trippers.go:587] Content-Type: application/json
I0702 13:17:10.798014 1 round_trippers.go:587] Content-Length: 129
I0702 13:17:10.798067 1 discovery_client.go:658] "Response Body" body=<
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
>
E0702 13:17:10.798115 1 handler.go:33] "Could not get user" err="MSG_LOGIN_UNAUTHORIZED_ERROR"
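That /version probe can be replayed by hand to take the dashboard out of the equation; a minimal sketch, where APISERVER and TOKEN stand in for the masked server address and bearer token above:
# If this also returns 401, the API server is rejecting the token itself
# (e.g. wrong aud/iss), and the dashboard is merely relaying that decision.
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/version"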
Probably related: https://github.com/oauth2-proxy/manifests/issues/348
Probably related: https://github.com/kubernetes/dashboard/issues/10103
I have a similar issue integrating oauth2-proxy with the Kubernetes Dashboard on AKS and Entra ID, so that users can log in to the dashboard after authenticating with Entra ID. oauth2-proxy sits between the dashboard and Entra ID. I followed the oauth2-proxy documentation and have these working:
- The dashboard redirects to the Microsoft login page.
- The authentication with Entra ID works fine.
- The redirect from Entra ID back to the dashboard works.
The problem is that the dashboard still asks for a bearer token. I inspected the request headers of the oauth2-proxy using the browser's developer tools, and the bearer token is generated. I have gone through various blog posts on this integration, but no configuration mix seems to solve my problem.
AKS is AAD-enabled, as RBAC is managed through Entra ID. Dashboard version 7.7.0 and oauth2-proxy version 7.11.2 are installed with Helm.
dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: kubernetes-dashboard
  name: kubernetes-dashboard-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-url: "$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "$host/oauth2/start?rd=$scheme://$host$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-Access-Token,X-Auth-Request-Email,X-Auth-Request-Groups,Authorization"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "256k"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - dashboard.internal.com
  rules:
    - host: dashboard.internal.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard-kong-proxy
                port:
                  number: 443
oauth2-proxy-values.yaml
config:
  clientID: "__CLIENT_ID__"
  clientSecret: "__CLIENT_SECRET__"
  cookieSecret: "__COOKIE_SECRET__"
  configFile: |-
    provider="oidc"
    client_id="__CLIENT_ID__"
    client_secret="__CLIENT_SECRET__"
    oidc_issuer_url="https://login.microsoftonline.com/__TENANT_ID__/v2.0"
    redirect_url="https://oauthproxy.internal.com/oauth2/callback"
    login_url="https://login.microsoftonline.com/__TENANT_ID__/oauth2/v2.0/authorize"
    scope="openid email profile"
    set_xauthrequest=true
    set_authorization_header=true
    pass_access_token=true
    pass_authorization_header=true
    auth_logging=true
    standard_logging=true
    request_logging=true
    silence_ping_logging=false
    cookie_domains=".internal.com"
    whitelist_domains=".internal.com"
    email_domains=["*"]
    skip_jwt_bearer_tokens=true
    extra_jwt_issuers=[
      "issuer=https://sts.windows.net/__TENANT_ID__/, audience=api://AzureADTokenExchange"
    ]
    redis_password="__REDIS_PASSWORD__"
    redis_connection_url="redis://oauth2-proxy-redis-master.kubernetes-dashboard.svc.aks-newprj-dev-001.privatelink.westeurope.azmk8s.io:6379"
ingress:
  enabled: true
  className: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "256k"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
  hosts:
    - oauthproxy.internal.com
  tls:
    - hosts:
        - oauthproxy.internal.com
sessionStorage:
  type: redis
oauth2-proxy pod logs:
10.244.76.16:41844 - e2650078177d4a9ec92eb285311cb11d - **user_email** [2025/08/26 08:24:17] [AuthSuccess] Authenticated via OAuth2: Session **user_email** user:MPxIywoA6UpuHBgw8u1Om_1vsv9G79N-zG0L_8Vp7Lc PreferredUsername:**user_email** token:true id_token:true created:2025-08-26 08:24:17.773791602 +0000 UTC m=+7713.631325392 expires:2025-08-26 09:30:07.618384776 +0000 UTC m=+11663.475918566 groups:[.......]
10.244.76.16:41844 - e2650078177d4a9ec92eb285311cb11d - - [2025/08/26 08:24:17] oauthproxy.internal.com GET - "/oauth2/callback?code=***redacted, not bearer token***&session_state=007e1189-077a-0159-9eab-d3bd60f105f0" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36" 302 24 0.585
10.244.76.16:41844 - 832190f8f8d4521532639366d29c7dce - **user_email** [2025/08/26 08:24:18] oauthproxy.internal.com GET - "/" HTTP/1.1 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36" 404 19 0.000
10.244.76.100:60908 - f50c7ca8-b382-4b14-b0e7-b833eb4c86d3 - - [2025/08/26 08:24:18] 10.244.76.12:4180 GET - "/ping" HTTP/1.1 "kube-probe/1.30" 200 2 0.000
10.244.76.100:60918 - 402004fa-c457-4755-9d3f-72c725d11b75 - - [2025/08/26 08:24:18] 10.244.76.12:4180 GET - "/ready" HTTP/1.1 "kube-probe/1.30" 200 2 0.000
10.244.76.11:36512 - 0fe467bdc0ac96e6215a48c87fe6d242 - - [2025/08/26 08:24:25] oauthproxy.internal.com GET - "/oauth2/auth" HTTP/1.1 "" 401 13 0.000
kubernetes-dashboard-auth pod logs:
I0826 09:54:07.355527 1 auth.go:38] "Bearer token" size=4786
E0826 09:54:07.451153 1 handler.go:33] "Could not get user" err="MSG_LOGIN_UNAUTHORIZED_ERROR"
[GIN] 2025/08/26 - 09:54:07 | 401 | 95.896263ms | 10.244.76.32 | GET "/api/v1/me"
Every variable is correctly substituted and the deployments are running. I examined the access token generated by the oauth2-proxy and passed to the dashboard: it works with kubectl --token=<access-token> get pods; however, if I paste it on the dashboard login page, it returns a 401 error. I would like to know how the dashboard handles its authentication. I decoded and compared the access token from the oauth2-proxy with a service account token, and I can see that the audiences are different. I expect that a token that works with kubectl should work with the dashboard, since the API server recognizes it.
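One way to see exactly why the API server rejects a token is to submit it in a TokenReview, which echoes back the authentication decision and error; a sketch, assuming kubectl rights to create tokenreviews and TOKEN as a placeholder:
# status.authenticated and status.error in the response show why the token
# is accepted or rejected (audience, issuer, expiry, ...).
kubectl create -o yaml -f - <<EOF
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: $TOKEN
EOF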
Stuck here with the same problem ... how can such a simple thing be so complicated and broken for such a long time?
Hi! This is the solution that worked for me (I tested it with Dex as the OIDC provider); maybe it helps someone.
- Install kubernetes-dashboard version 7.14.0 with the default Helm installation into the namespace system-services (i.e., I didn't make any changes to values.yaml and didn't enable ingress in values.yaml).
- Next, apply the manifests below, which create the oauth2-proxy Deployment, Secret, Service, and Ingress for the kubernetes-dashboard oauth2-proxy (but don't forget to change the variables oidc-id, oidc-secret, cookie-secret, DASHBOARD.DOMAIN.PRINT.HERE, DEX.DOMAIN.PRINT.HERE, and ssl-cert-chain-tls to your own values):
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-oidc-creds
  namespace: system-services
type: Opaque
stringData:
  oidc-id: "dex-k8s-auth"
  oidc-secret: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  cookie-secret: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard-oauth2-proxy
name: kubernetes-dashboard-oauth2-proxy
namespace: system-services
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard-oauth2-proxy
template:
metadata:
labels:
k8s-app: kubernetes-dashboard-oauth2-proxy
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: oauth2-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:latest
imagePullPolicy: IfNotPresent
args:
- --provider=oidc
- --provider-display-name="Dex SSO"
- --scope=openid profile email groups
- --redirect-url=https://DASHBOARD.DOMAIN.PRINT.HERE/oauth2/callback
- --oidc-issuer-url=https://DEX.DOMAIN.PRINT.HERE
- --oidc-email-claim=email
- --oidc-groups-claim=groups
- --cookie-secure=false
- --email-domain=*
- --upstream=https://kubernetes-dashboard-kong-proxy:443
- --http-address=0.0.0.0:4180
- --pass-access-token=true
- --pass-authorization-header=true
- --pass-user-headers
- --ssl-upstream-insecure-skip-verify=true
env:
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
name: kubernetes-dashboard-oidc-creds
key: oidc-id
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
name: kubernetes-dashboard-oidc-creds
key: oidc-secret
- name: OAUTH2_PROXY_COOKIE_SECRET
valueFrom:
secretKeyRef:
name: kubernetes-dashboard-oidc-creds
key: cookie-secret
ports:
- containerPort: 4180
protocol: TCP
serviceAccountName: kubernetes-dashboard-api
nodeSelector:
"kubernetes.io/os": linux
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard-oauth2-proxy
  name: kubernetes-dashboard-oauth2-proxy
  namespace: system-services
spec:
  ports:
    - name: kubernetes-dashboard-oidc-port
      port: 4180
      targetPort: 4180
  selector:
    k8s-app: kubernetes-dashboard-oauth2-proxy
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-oauth2-proxy
  namespace: system-services
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/use-http2: 'true'
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
spec:
  tls:
    - hosts:
        - DASHBOARD.DOMAIN.PRINT.HERE
      secretName: ssl-cert-chain-tls
  rules:
    - host: DASHBOARD.DOMAIN.PRINT.HERE
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: kubernetes-dashboard-oauth2-proxy
                port:
                  number: 4180
Then I create a RoleBinding for the user authorized via OAuth:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-user-rbinding-view
  namespace: monitoring
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: [email protected]
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
After applying this RoleBinding, the user can log into the kubernetes-dashboard without a token, logging in via OIDC.
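If binding individual users gets tedious, the same pattern works with an OIDC group subject; a sketch, where dashboard-viewers is a hypothetical group name coming from the groups claim:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewers-view
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: dashboard-viewers   # hypothetical group from the OIDC groups claim
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view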
@SergeSpinoza thanks for the update.
I'm a little further ahead, using the Gateway API with an HTTPRoute (backed by the Istio gateway implementation), but I want to stay as agnostic as possible without using Istio CRDs.
Since your ingress code above doesn't show any crazy annotation shenanigans, it may work with an HTTPRoute too. I'll test ASAP and come back with a comment.
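For reference, a minimal HTTPRoute sketch of the equivalent routing (assumptions: a Gateway named gateway in the gateway-system namespace, the oauth2-proxy Service from the manifests above, and a placeholder hostname):
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: kubernetes-dashboard-oauth2-proxy
  namespace: system-services
spec:
  parentRefs:
    - name: gateway              # hypothetical Gateway name
      namespace: gateway-system  # hypothetical namespace
  hostnames:
    - DASHBOARD.DOMAIN.PRINT.HERE
  rules:
    - backendRefs:
        - name: kubernetes-dashboard-oauth2-proxy
          port: 4180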