Remove create secrets permissions if the Dashboard is in another namespace
What happened?
If you deploy the Kubernetes Dashboard in another namespace (for example, a dashboard namespace),
it shows this error:
- panic: secrets is forbidden: User "system:serviceaccount:dashboard:kubernetes-sa" cannot create resource "secrets" in API group "" in the namespace "dashboard"
If I grant secret-creation permissions in this namespace it works fine, and after that the Dashboard does not create any further secrets.
What did you expect to happen?
To be able to deploy the Dashboard in another namespace.
How can we reproduce it (as minimally and precisely as possible)?
Deploy https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml in another namespace
Anything else we need to know?
No response
What browsers are you seeing the problem on?
No response
Kubernetes Dashboard version
2.6.0
Kubernetes version
1.24.1
Go version
No response
Node.js version
No response
When we give the service account a role that can create secrets, the pod automatically creates the kubernetes-dashboard-key-holder secret.
I'm working on this topic.
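For reference, the workaround described above corresponds roughly to the extra RBAC rule below. This is a sketch only: the dashboard namespace and kubernetes-sa service account are taken from the error message in this report, and the Role/RoleBinding names are made up for illustration.

```yaml
# Sketch of the workaround: grant the Dashboard service account permission to
# create secrets in its namespace, on top of the scoped rules shipped in
# recommended.yaml. Namespace, service account, and object names are examples.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-create-secrets   # hypothetical name
  namespace: dashboard
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    # RBAC cannot restrict the create verb with resourceNames, so this rule
    # necessarily covers all secrets in the namespace.
    verbs: ["create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-create-secrets   # hypothetical name
  namespace: dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-sa
    namespace: dashboard
roleRef:
  kind: Role
  name: kubernetes-dashboard-create-secrets
  apiGroup: rbac.authorization.k8s.io
```

With a rule like this in place, the Dashboard can create kubernetes-dashboard-key-holder on startup, which matches the behaviour described above.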
If you want to deploy Dashboard in a different namespace you should:
- Download the yaml file and change the kubernetes-dashboard namespace to the one of your choice
- Update the --namespace argument
It should work after that.
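As a rough sketch of those two changes against the stock recommended.yaml (only the affected fields are shown, with my-namespace standing in for the namespace of your choice; this is not a complete manifest):

```yaml
# Only the fields that change are shown; every other
# "namespace: kubernetes-dashboard" occurrence in recommended.yaml needs the
# same edit as well.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace            # was: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kubernetes-dashboard
  namespace: my-namespace       # was: kubernetes-dashboard
spec:
  template:
    spec:
      containers:
        - name: kubernetes-dashboard
          args:
            - --auto-generate-certificates
            - --namespace=my-namespace   # was: --namespace=kubernetes-dashboard
```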
Hi @floreks, even if we change the namespace, the secret names are still hardcoded in the Go code. I will work on adding this as a new feature in the code.
I don't think so. We updated the logic quite a long time ago to use the namespace from the dashboard binary's --namespace argument. All of our Dashboard-exclusive resources use it. Only the resource names are hardcoded, not the namespace itself.
panic: secrets is forbidden: User "system:serviceaccount:dashboard:kubernetes-sa" cannot create resource "secrets" in API group "" in the namespace "dashboard"
This error also does not seem right. I think your Dashboard is working, but you misunderstood how it works. You have to create a user account with proper privileges on your own, since the Dashboard itself does not have any privileges. We only act as a proxy between the user and the API server and do not manage privileges.
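For completeness, "create a user account with proper privileges" usually means something like the following. This is a minimal sketch of the commonly documented pattern; admin-user is an example name, and binding to the built-in cluster-admin ClusterRole is only for illustration, not a production recommendation.

```yaml
# Sketch: a dedicated service account used to log in to the Dashboard, bound
# to the built-in cluster-admin ClusterRole. Replace the namespace with the
# one the Dashboard actually runs in.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user              # example name
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user              # example name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```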
Hi @floreks, the Kubernetes Dashboard has update, watch, and get permissions, but not create, because the RBAC rules hardcode the secret names (which is good for minimizing security risks). So we have to change the yaml file to grant create permissions on secrets without listing them under resourceNames. My point is that the Dashboard application should not hardcode the secret names in the Go code: I want to make it possible to change the secret names and resources, to give users more options. I created a Helm chart with the Dashboard, OAuth, and Dex, but the Dashboard's secret names are fixed in the code, so if you update the Helm chart and change the secret names, the Dashboard will fail.
The lack of create permissions on secrets for the Dashboard is intentional, and we don't want to change it. That's why the secret is created via yaml and should not be deleted. We can only agree to fully scoped permissions with resourceNames provided. It's done by design to increase security.
We also don't really want to allow name changes of resources right now. It would require a complete overhaul of our config system. This is a big change and as mentioned in the README we do have a soft code freeze now.
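For context, the fully scoped rule referred to above looks roughly like this in recommended.yaml (a paraphrased sketch; kubernetes-dashboard-key-holder is the secret mentioned earlier in this thread, and the manifest scopes its other Dashboard-owned secrets the same way):

```yaml
# Sketch of the scoped Role style used by the upstream manifest: only the
# Dashboard-owned secrets listed under resourceNames can be read or modified,
# and the create verb is deliberately not granted.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder"]   # plus the other Dashboard secrets
    verbs: ["get", "update", "delete"]
```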
@floreks Yeah, I understood the point, but can we make it possible to change the secret names? That's my main point in this topic.
Yeah, I understood the point, but can we make it possible to change the secret names? That's my main point in this topic.
You might have missed this part as I have edited my message:
We also don't really want to allow name changes of resources right now. It would require a complete overhaul of our config system. This is a big change and as mentioned in the README we do have a soft code freeze now.
Hi @floreks
Can I start working on this topic?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.