aws-efs-csi-driver
controller.tags does not work with AmazonEFSCSIDriverPolicy
/kind bug
What happened?
When deploying the latest version of the Helm chart (this probably happens with any other version as well) and specifying additional tags via controller.tags, these tags are added to the access point. Because AmazonEFSCSIDriverPolicy only allows elasticfilesystem:TagResource and elasticfilesystem:CreateAccessPoint for the tag efs.csi.aws.com/cluster, you get an AccessDenied error.
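The denials are consistent with the managed policy scoping those actions by tag key. As an illustration of the IAM mechanism only (a simplified sketch, not the verbatim AmazonEFSCSIDriverPolicy; check the live policy document in IAM), a statement that permits requests carrying the cluster tag and no other tag keys looks like:

```json
{
  "Effect": "Allow",
  "Action": "elasticfilesystem:CreateAccessPoint",
  "Resource": "*",
  "Condition": {
    "StringLike": { "aws:RequestTag/efs.csi.aws.com/cluster": "true" },
    "ForAllValues:StringEquals": { "aws:TagKeys": "efs.csi.aws.com/cluster" }
  }
}
```

Any extra key under controller.tags would then fail the aws:TagKeys condition.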
If you then add an additional custom policy to the role with the following permissions, it works:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:TagResource",
        "elasticfilesystem:CreateAccessPoint"
      ],
      "Resource": "*"
    }
  ]
}
```
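If the controller's role is managed in Terraform, attaching this as an inline policy could look like the following sketch (the role reference is illustrative and must point at the role assumed by the EFS CSI controller):

```hcl
resource "aws_iam_role_policy" "efs_csi_tagging" {
  name = "efs-csi-custom-tagging"
  # Illustrative: the IAM role assumed by the EFS CSI controller (e.g. via IRSA).
  role = aws_iam_role.efs_csi.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "elasticfilesystem:TagResource",
        "elasticfilesystem:CreateAccessPoint"
      ]
      Resource = "*"
    }]
  })
}
```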
What you expected to happen? Either make the policy allow these actions, or document that a custom policy is needed when tags are specified.
How to reproduce it (as minimally and precisely as possible)? Just add a custom tag to the controller:
```hcl
set {
  name  = "controller.tags.customTag"
  value = "1234"
}
```
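For context, that set block sits inside a Terraform helm_release such as the following minimal sketch (resource and release names are illustrative):

```hcl
resource "helm_release" "aws_efs_csi_driver" {
  name       = "aws-efs-csi-driver"
  repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
  chart      = "aws-efs-csi-driver"
  namespace  = "kube-system"

  # Every key under controller.tags becomes a tag on dynamically
  # provisioned access points.
  set {
    name  = "controller.tags.customTag"
    value = "1234"
  }
}
```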
Please also attach debug logs to help us better diagnose. In the pod logs you get:
```
I0430 19:51:33.573336 1 controller.go:289] Using user-specified structure for access point directory.
I0430 19:51:33.573395 1 controller.go:295] Appending PVC UID to path.
I0430 19:51:33.573428 1 controller.go:313] Using /dynamic_provisioning/pvc-57b7e5f6-6d5c-4b37-96a8-f187addcc915-00019b17-0d6c-4041-868c-5d445bd402ca as the access point directory.
E0430 19:51:33.601089 1 driver.go:106] GRPC error: rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
```
In CloudTrail, the CreateAccessPoint event shows:

```
"errorMessage": "User: arn:aws:sts::xxxx:assumed-role/xxxx-role-csi-efs/1714504702512126936 is not authorized to perform: elasticfilesystem:TagResource on the specified resource",
```

and once you sort that out, you get:

```
"errorMessage": "User: arn:aws:sts::xxxx:assumed-role/xxxx-role-csi-efs/1714504702512126936 is not authorized to perform: elasticfilesystem:CreateAccessPoint on the specified resource",
```
Hi @emboss64, we cannot modify the AmazonEFSCSIDriverPolicy to allow arbitrary tags on access points. This could be a security risk and lead to privilege escalation, as tags are often used for controlling access to resources. If you choose to do this, you'll need to create a separate policy.
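Such a separate policy does not have to be a blanket allow. A narrower sketch keeps a condition on the efs.csi.aws.com/cluster tag the driver always sets (an untested assumption; verify that the condition key is honored for EFS tag-on-create in your account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:CreateAccessPoint",
        "elasticfilesystem:TagResource"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestTag/efs.csi.aws.com/cluster": "true" }
      }
    }
  ]
}
```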
I know; that's why I suggested documenting that specifying tags also requires an additional policy.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.