gcp-compute-persistent-disk-csi-driver
[doc] minimal permissions required by GCP driver
The current documentation for driver installation suggests configuring a combination of permissions and roles for the service account used by the driver: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/e5c925db0ad7233c54c699d9d5d7f2944905470a/docs/kubernetes/user-guides/driver-install.md?plain=1#L17-L21
In the context of OpenShift, our aim is to enhance security by configuring the service account with least privilege: granting the exact permissions required at the most granular level, while avoiding roles that may be broader than necessary.
If this approach is applicable to GCP, we kindly request that the documentation be updated to include a comprehensive list of the exact permissions required by the driver. We recently made a similar adjustment for Azure deployments, where the driver permissions are now documented upstream, and we would like to take the same approach for GCP. A sketch of what we have in mind follows.
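To make the ask concrete, here is a minimal sketch of the shape we have in mind, loosely modeled on the role bindings in the current install docs. The role name, the `${PROJECT}` and `${GCE_PD_SA_NAME}` variables, and above all the permission list are illustrative assumptions on our part - the verified minimal permission list is exactly what we are asking to have documented:

```sh
# Illustrative only: a custom role standing in for the broad predefined
# roles. The includedPermissions below are a guess at what the driver
# touches, NOT a verified minimal set - that list is the ask of this issue.
cat > pd-csi-driver-role.yaml <<'EOF'
title: PD CSI Driver (illustrative minimal role)
description: Example least-privilege role for the GCP PD CSI driver
stage: GA
includedPermissions:
- compute.disks.create
- compute.disks.get
- compute.disks.delete
- compute.instances.get
- compute.instances.attachDisk
- compute.instances.detachDisk
EOF

# Create the custom role and bind it to the driver's service account
# (variable names here are placeholders, not taken verbatim from the docs).
gcloud iam roles create pd_csi_driver_minimal \
  --project="${PROJECT}" \
  --file=pd-csi-driver-role.yaml

gcloud projects add-iam-policy-binding "${PROJECT}" \
  --member="serviceAccount:${GCE_PD_SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
  --role="projects/${PROJECT}/roles/pd_csi_driver_minimal"
```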
/assign @mattcary
@tyuchn @sunnylovestiramisu @leiyiz Hello, can you please take a look?
> A comprehensive list of the exact permissions required by the driver.
Are you saying that we should not use the roles/compute.storageAdmin role? The rest of it is explicitly listed.
I think the storageAdmin role is appropriate. The permissions granted by that role are necessary for operation.
Are you requesting that we list all the permissions, or is it enough to have the recommended roles?
I've double-checked with the team the request originated from, and the main motivation seems to be customers with strict policies that forbid the use of roles in some cases. The concern is that roles could grant broader permissions than actually needed - which might not be the case here, since @mattcary mentioned that the "storageAdmin role is appropriate".
So in short: yes, the request is to have a list of all permissions instead of roles. Although it might not make sense if the roles are already "precise" - then we would basically just restate what the roles currently contain for the sake of removing roles, and I don't see any benefit in that.
On the other hand, if the full list of permissions were reevaluated and some items dropped, it could help make our deployments more secure - I believe this is the expectation.
@sunnylovestiramisu This was not intended as a statement that some role is "wrong". We actually don't know what the exact set of permissions would be (without roles), or whether any of them could be dropped - see the sketch below for one way to enumerate them.
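For anyone who wants to start that reevaluation, a small sketch (nothing here is driver-specific; it only uses stock gcloud): predefined roles can be expanded into their constituent permissions, which gives an explicit list to trim from:

```sh
# Expand the predefined role into the exact permissions it grants.
# `value(...)` joins list fields with ';', so split them onto lines.
gcloud iam roles describe roles/compute.storageAdmin \
  --format="value(includedPermissions)" | tr ';' '\n'

# From there, cross-check against what the driver's service account
# actually exercises (e.g. via Cloud Audit Logs or IAM Recommender's
# unused-permission suggestions) and drop anything unneeded.
```

That output is an upper bound; the open question in this issue is which subset the driver actually needs.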
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.