Best practices for implementing authentication via OIDC
What would you like to be added?
Hi. This isn't really a feature request; it's more a wish to hear best practices / recipes that work. Please feel free to move it if you think there is a better type/label for it.
My goal is to provide the Dashboard to a few different teams that have different permissions (configured via RBAC), accessible via OIDC and Keycloak. To achieve that I was using oauth2-proxy, but I soon realized that oauth2-proxy and the Dashboard's XHR requests don't work nicely together: after a while the Dashboard page fails with a "CORS: Missing allow origin" error, the Origin being "Origin: null", so I can't really allow it in Keycloak.
How are others achieving this? I think having the Dashboard work with OIDC should be a common setup. I'm ready to replace oauth2-proxy with something else if required.
Thanks.
Why is this needed?
I think using the Dashboard with OIDC should be a common scenario.
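For context, here is a sketch of the kind of oauth2-proxy sidecar configuration this setup implies. All hostnames, the realm, the client ID/secret, and the upstream port are placeholders, not values from this thread; the idea is that --pass-authorization-header forwards the OIDC ID token to the Dashboard as a Bearer token:

```yaml
# Hypothetical sidecar container in the kubernetes-dashboard pod.
# Issuer URL, client credentials, and secrets below are placeholders.
- name: oauth2-proxy
  image: quay.io/oauth2-proxy/oauth2-proxy:v7.6.0
  args:
    - --provider=oidc
    - --oidc-issuer-url=https://keycloak.example.com/realms/myrealm
    - --client-id=kubernetes-dashboard
    - --client-secret=REDACTED
    - --cookie-secret=REDACTED            # random base64 value, 16/24/32 bytes
    - --email-domain=*
    - --http-address=0.0.0.0:4180
    - --upstream=https://127.0.0.1:8443   # dashboard container in the same pod
    - --ssl-upstream-insecure-skip-verify=true
    - --pass-authorization-header=true    # forward the OIDC ID token as a Bearer token
  ports:
    - containerPort: 4180
```

This is only a sketch under the assumptions above; the CORS/"Origin: null" failure described in this issue can still occur when the proxy redirects an XHR request to the IdP for re-authentication.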
Currently I'm using keycloak-gatekeeper as a sidecar proxy to consume the token from OIDC; here is a sample for your reference: https://github.com/kubebb/addon-components/tree/master/kube-dashboard
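For anyone considering the same route, a rough sketch of such a keycloak-gatekeeper sidecar follows. All URLs, credentials, and ports are placeholders (see the linked sample for a working manifest), and note that keycloak-gatekeeper was later renamed Louketo Proxy and has since been discontinued:

```yaml
# Hypothetical sidecar container; discovery URL, client credentials,
# and hostnames are placeholders.
- name: gatekeeper
  image: keycloak/keycloak-gatekeeper:7.0.0
  args:
    - --listen=0.0.0.0:3000
    - --discovery-url=https://keycloak.example.com/auth/realms/myrealm
    - --client-id=kubernetes-dashboard
    - --client-secret=REDACTED
    - --redirection-url=https://dashboard.example.com
    - --upstream-url=https://127.0.0.1:8443   # dashboard container in the same pod
    - --enable-refresh-tokens=true
    - --encryption-key=REDACTED               # key used to encrypt refresh tokens
  ports:
    - containerPort: 3000
```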
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I made an oauth2-proxy-compatible adapter that forwards requests to kubernetes-dashboard with the right header format; see a sample deployment at https://github.com/aslafy-z/k8s-dashboard-impersonation-proxy/tree/main?tab=readme-ov-file#demo.
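Worth noting for any impersonation-based approach: the proxy's own service account needs RBAC permission to impersonate users and groups, after which each user's regular RBAC bindings determine what they can see in the Dashboard. A minimal ClusterRole for that (the name is arbitrary) would be:

```yaml
# Grants the proxy's service account the right to impersonate users and groups.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-impersonator   # arbitrary name
rules:
  - apiGroups: [""]
    resources: ["users", "groups"]
    verbs: ["impersonate"]
```

A ClusterRoleBinding would then bind this role to the proxy's service account; the Kubernetes API server applies the impersonated identity's own authorization rules to every forwarded request.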
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten