Create resources like DynamoDB tables or S3 buckets with IAM Policy and Role in one go
I'd like to be able to provision, in a single Helm chart, my application and a DynamoDB table using the corresponding controller, so that there is no extra step needed to allow the pod running with a specific ServiceAccount to access this DynamoDB table.
A possible implementation would be to specify the ServiceAccount that should have access to the DynamoDB table in the definition of the table. The controller would not only create the table, but also the IAM policy, the IAM role and the trust relationship with the ServiceAccount.
This would be useful for multiple controllers, including S3 buckets.
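For illustration only, it could look something like the sketch below; the accessServiceAccounts field does not exist in the DynamoDB controller today and is only meant to show the idea:

apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: my-table
spec:
  tableName: my-table
  # Hypothetical field: the controller would create the IAM policy, the IAM
  # role and the IRSA trust relationship for this service account automatically.
  accessServiceAccounts:
    - name: my-application-sa
      access: readOnly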
Please let me know if I'm understanding this correctly: You'd like a Helm chart that can create a DynamoDB table, create an IAM role that only has access to that table, attach that role to a service account (using IRSA) and then bind that service account to a deployment?
The IAM controller does not yet support referencing fields between resources, so we can't yet link a newly created policy to a role in the same Helm chart - but that is simple enough to add, we have the tools. Otherwise, I don't think there is anything preventing this from being possible with the use of some clever Helm templating and an update to the IAM controller. It's all a matter of setting up the Helm chart in a way to re-use values and interpolate strings.
A (very) truncated version of the manifests would look like
apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: my-table
spec:
  tableName: my-table
  ...
---
apiVersion: iam.services.k8s.aws/v1alpha1
kind: Policy
metadata:
  name: table-access-policy
spec:
  name: my-table-readonly
  policyDocument: |
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "dynamodb:Get*"
          ],
          "Resource": [
            "arn:aws:dynamodb:*:*:table/my-table"
          ]
        }
      ]
    }
---
apiVersion: iam.services.k8s.aws/v1alpha1
kind: Role
metadata:
  name: table-access-role
spec:
  name: my-application-table-role
  assumeRolePolicyDocument: |
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/OIDC_PROVIDER"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "OIDC_PROVIDER:sub": "system:serviceaccount:{{ .Release.Namespace }}:my-application-sa"
            }
          }
        }
      ]
    }
  policiesRef:
    - from:
        name: table-access-policy
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-application-sa
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/table-access-role
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
spec:
  template:
    spec:
      serviceAccountName: my-application-sa
      containers:
        - image: myapp:1.2
          name: myapp
          ...
  ...
Your Helm chart values file could require ACCOUNT_ID and OIDC_PROVIDER, which are also both necessary for setting up the DynamoDB and IAM controllers in the first place.
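For example, a minimal sketch of how those values could be wired through Helm templating (the value names accountID and oidcProvider are just examples, not required by any controller):

# values.yaml
accountID: "111122223333"
oidcProvider: oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE1234567890

# templates/role.yaml - fragment of the assumeRolePolicyDocument shown above
"Principal": {
  "Federated": "arn:aws:iam::{{ .Values.accountID }}:oidc-provider/{{ .Values.oidcProvider }}"
},
"Condition": {
  "StringEquals": {
    "{{ .Values.oidcProvider }}:sub": "system:serviceaccount:{{ .Release.Namespace }}:my-application-sa"
  }
}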
Does this lead you in the right direction?
@RedbackThomson, thanks for this. This is one way to do it, and I'll test it. However, it feels like the developer of the Helm chart has too much power, because they would be able to create an IAM role and policies to access resources they have not created. This could probably be controlled by OPA and Rego rules. I was looking for a shortcut to create the IAM part and also to restrict the access.
has too much power because they would be able to create an IAM role and policies to access resources they have not created
Yeah, this is an inherent risk in developing the IAM controller that we talked about a lot. You essentially need to give a role IAMFullAccess to operate this controller, which in turn allows you to gain full admin access by creating a new role. Giving anyone RBAC access to the IAM controller should be seriously considered.
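For context, the core of that grant is essentially unrestricted IAM actions, roughly equivalent to a policy statement like this (simplified sketch, not the full managed policy):

{
  "Effect": "Allow",
  "Action": "iam:*",
  "Resource": "*"
}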
I was looking for a shortcut to create the IAM part
This problem does not go away if the S3 or DynamoDB controller were to create the IAM policy. These controllers would now need IAMFullAccess, and someone could potentially abuse it to create arbitrary roles as well.
I think relying on K8s RBAC and/or IAM permissions to control access to creating roles is still the best answer because we already have existing tooling that can filter, allow and block these requests with more customization than we would put into an ACK controller.
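As a sketch of what that looks like in practice (the resource names below follow the API groups used above and are assumed), the IAM controller's custom resources can be gated behind a ClusterRole that is bound only to platform administrators:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ack-iam-resources-admin
rules:
  # Bind this only to trusted platform admins; application teams never
  # receive a binding to it, so they cannot create IAM roles or policies.
  - apiGroups: ["iam.services.k8s.aws"]
    resources: ["policies", "roles"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]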
The IAM controller does not yet support referencing fields between resources, so we can't yet link a newly created policy to a role in the same Helm chart - but that is simple enough to add, we have the tools.
This sounds great. Even guidance on how to use Status would go a long way in putting together these great pieces.
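For example, after the Policy above is reconciled I can already see its ARN in the resource status, roughly like this (the exact field layout may differ per controller version):

status:
  ackResourceMetadata:
    arn: arn:aws:iam::111122223333:policy/my-table-readonly
  conditions:
    - type: ACK.ResourceSynced
      status: "True"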
This problem does not go away if the S3 or DynamoDB controller were to create the IAM policy. These controllers would now need IAMFullAccess and someone could potentially abuse it to create arbitrary roles as well.
Are you saying that AWS IAM has been around for a decade but its security model has always been illusory? Have you ascended? :)
Are you saying that AWS IAM has been around for a decade but its security model has always been illusory? Have you ascended? :)
I have no clue what this means, haha. I was trying to say that we don't want to reinvent existing, superior (and more granular) authorisation systems, so we will attempt to fall back to RBAC and/or IAM when we can.
Would the following scenario be acceptable? A developer gets a namespace attributed to them, with a ServiceAccount that has the right to create pods, and also buckets or DynamoDB tables via the respective controllers, but no IAM objects via the IAM controller. This is where Kubernetes RBAC plays its role. The S3 and DynamoDB controllers would then have (limited) rights to create IAM roles dedicated to the IRSA binding between the S3 or DynamoDB resources and the application. This way the cluster admin can give developers some IAM access in a specific namespace, but strictly limited to the resources they have been allowed to create. It's a bit like the model that exists in Cloud Foundry, where a developer can create a MySQL instance alongside their application and is given access to its credentials via environment variables. But this version is better because it uses IAM roles, which are battle tested, and doesn't need to expose credentials.
I don't see an easy way to abuse the S3 and DynamoDB controllers to gain IAM access to other resources.
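A sketch of the RBAC side of this scenario (assuming the standard lowercase-plural resource names for these CRDs): the developer's namespace gets a Role that allows tables and buckets but deliberately grants nothing from the IAM controller's API group.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-developer
  namespace: team-a
rules:
  - apiGroups: ["dynamodb.services.k8s.aws"]
    resources: ["tables"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["s3.services.k8s.aws"]
    resources: ["buckets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Intentionally no rule for iam.services.k8s.aws resources.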
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale
The policiesRef section mentioned above is really necessary for this to be useful, IMO:

policiesRef:
  - from:
      name: table-access-policy

Without being able to do that, I'll have to intervene or associate the policy in another way, but then what was the point of doing it in Kubernetes at all?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/close
@eks-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Provide feedback via https://github.com/aws-controllers-k8s/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.