aws-iam-authenticator
Can I not add an IAM group to my ConfigMap?
I have an IAM user named Alice, and she's a member of the IAM group eks-admin.
The following configuration works, but when I remove Alice from mapUsers, kubectl commands fail with `error: You must be logged in to the server (Unauthorized)`.
Can't I add an IAM group to this ConfigMap, just like I can add a user or role?
```
$ aws sts get-caller-identity
{
    "Account": "123456789012",
    "UserId": "AIDAxxxxxxxxxxxxxxx",
    "Arn": "arn:aws:iam::123456789012:user/Alice"
}
```
```yaml
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EKS-WorkerNodes-NodeInstanceRole-1R46GDBD928V5
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/Alice
      username: alice
      groups:
        - system:masters
    - userarn: arn:aws:iam::123456789012:group/eks-admin
      username: eks-admin
      groups:
        - system:masters
```
Didn't read closely at first: you can only add roles and users, not groups.
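Since only roles and users can be mapped, a common pattern (a sketch; the role name `eks-admin-role` is hypothetical) is to map a single IAM role in `mapRoles` and let the group's members assume that role:

```yaml
mapRoles: |
  - rolearn: arn:aws:iam::123456789012:role/eks-admin-role
    username: eks-admin
    groups:
      - system:masters
```

Group membership is then managed entirely on the IAM side, via who is permitted to assume the role, while aws-auth only ever references that one role.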
~~I think this is a duplicate of #157 (which is probably a duplicate of another, honestly)~~
Can we re-open this as a feature request? Managing permissions would be significantly improved if we could add groups.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
/remove-lifecycle stale
Is there any progress on this one? This is significant functionality for managing access to EKS for larger engineering groups. Right now it requires me to list out a bunch of users and add to the list every time someone new needs access.
With Terraform 0.12 this can easily be worked around, thanks to the `for` expression. A rough example:
```hcl
locals {
  k8s_admins = [
    for user in data.terraform_remote_state.iam.outputs.admin_members[0] :
    {
      user_arn = join("", ["arn:aws:iam::whatever:user/", user])
      username = user
      group    = "system:masters"
    }
  ]
  k8s_developers = [
    for user in data.terraform_remote_state.iam.outputs.developers_members[0] :
    {
      user_arn = join("", ["arn:aws:iam::whatever:user/", user])
      username = user
      group    = "system:developers-write"
    }
  ]
  k8s_map_users = concat(local.k8s_admins, local.k8s_developers)
}
```
> Is there any progress on this one? This is significant functionality for managing access to EKS for larger engineering groups. Right now it requires me to list out a bunch of users and add to the list every time someone new needs access.
It's even more fun when you don't have IAM users and everybody accesses an assumed role session via Okta.
> With Terraform 0.12 this can easily be worked around, thanks to the `for` expression. (Terraform snippet quoted above.)
I'm struggling to put the k8s_admins generated here into the configmap to apply later in automation, how did you manage to do that?
/remove-lifecycle stale
> I'm struggling to put the k8s_admins generated here into the configmap to apply later in automation, how did you manage to do that?
Probably using `jsonencode(local.k8s_admins)`.
I would also like to see @adeakrvbd's question above answered. +1
Me too. It's really weird to not support IAM groups.
Please consider adding IAM group support for EKS. This would be the easiest way to manage user access control by far.
Until we have this, I have come up with a strategy using AssumeRole, which I describe in my blog post.
@prestonvanloon I know we discussed this in a different thread, but I basically do the same thing that @amitsaha describes in his blog post, although mine seems a bit simpler:
- Create IAM roles for Readonly and Admin.
- Attach these role ARNs under the mapRoles section of aws-auth.yaml.
- Create an IAM group for Readonly and Admin.
- Add an AssumeRole policy for each group ("Readonly" and "Admin") to be able to assume the roles created in step 1.
- Ensure there is a trust relationship set up to allow account users to assume the two roles.
- Once applied, everyone in the IAM groups has their respective permissions. All they have to do is follow the AssumeRole instructions here: https://aws.amazon.com/premiumsupport/knowledge-center/iam-assume-role-cli/

I'll obviously automate the last step so people don't have to run the commands and set the keys, but yeah, that's pretty much it. It's not pretty, but it allows me to abstract user management into a group, which is really the goal here since IAM groups are still not supported.
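The steps above can be sketched in Terraform (a hedged sketch, not the commenter's actual code; the account ID, role name, and group name "Admin" are all placeholders):

```hcl
# Step 1: role that EKS admins will assume; the trust policy lets
# principals in this account assume it (step 5).
resource "aws_iam_role" "eks_admin" {
  name = "eks-admin"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::123456789012:root" }
    }]
  })
}

# Step 4: allow members of the "Admin" IAM group to assume that role.
resource "aws_iam_group_policy" "admin_assume" {
  name  = "assume-eks-admin"
  group = "Admin"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "sts:AssumeRole"
      Resource = aws_iam_role.eks_admin.arn
    }]
  })
}
```

The role's ARN would then go under mapRoles in aws-auth (step 2), so the ConfigMap never has to list individual users.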
@adeakrvbd @jclynny I used `yamlencode` to get this working. You can see more code here: https://github.com/dockup/terraform-aws/commit/fd8c679a8bdea533a903d5cb12b8aa7d41c5b632#diff-a338da04c3bdfe4c0e6b5db98bc233bdR93
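One caveat worth noting (a sketch, assuming the locals from the earlier Terraform 0.12 comment, which use `user_arn` and a singular `group`): the aws-auth ConfigMap expects the keys `userarn`, `username`, and a `groups` list, so the objects may need remapping before encoding:

```hcl
locals {
  map_users_yaml = yamlencode([
    for u in local.k8s_map_users : {
      userarn  = u.user_arn # aws-auth expects "userarn", not "user_arn"
      username = u.username
      groups   = [u.group]  # and a list under "groups"
    }
  ])
}
```

The resulting string can then be assigned directly to the `mapUsers` key of the ConfigMap's data.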
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
@adeakrvbd @jclynny here's a complete example of how I build mapUsers in Terraform 0.12 based on IAM users bound to IAM groups.
```hcl
data "aws_iam_group" "developer-members" {
  group_name = "developer"
}

data "aws_iam_group" "admin-members" {
  group_name = "admin"
}

locals {
  k8s_admins = [
    for user in data.aws_iam_group.admin-members.users :
    {
      # aws-auth expects the keys "userarn", "username" and "groups"
      userarn  = user.arn
      username = user.user_name
      groups   = ["system:masters"]
    }
  ]
  k8s_analytics_users = [
    for user in data.aws_iam_group.developer-members.users :
    {
      userarn  = user.arn
      username = user.user_name
      groups   = ["company:developer-users"]
    }
  ]
  k8s_map_users = concat(local.k8s_admins, local.k8s_analytics_users)
}
```
```hcl
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<YAML
- rolearn: ${module.eks.eks_worker_node_role_arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
YAML

    mapUsers = yamlencode(local.k8s_map_users)

    mapAccounts = <<YAML
- "${data.aws_caller_identity.current.account_id}"
YAML
  }
}
```
The downside is that you need to run `terraform apply` each time users are added to or removed from the IAM groups, and a user shouldn't be in more than one group at a time.
I came across this: https://github.com/ygrene/iam-eks-user-mapper. Maybe this is a viable workaround for you?
/kind feature /lifecycle frozen
/remove-lifecycle stale
Please reconsider support for IAM groups!
I can get it working with either an IAM role or an IAM user via Terraform. However, for our use case, where we're trying to use HashiCorp Vault to grant dynamic time-bound access longer than 12 hours (the max session duration for the IAM-role-based approach), the ability to map a k8s group to an IAM group is key.
If your company is using SSO (via Okta, for example), there are no IAM users and everyone is using assumed roles with temporary credentials. This makes it impossible for our developers to use EKS in a sane way and hits enterprise customers the hardest.
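For the SSO/assumed-role case, one pattern that helps until group mapping exists (a sketch; the account ID, role name `OktaDevelopers`, and the k8s group are placeholders) is to map the federated role itself in mapRoles, using aws-iam-authenticator's session-name template variable so each person still gets a distinct username:

```yaml
mapRoles: |
  - rolearn: arn:aws:iam::123456789012:role/OktaDevelopers
    username: okta:{{SessionName}}
    groups:
      - system:developers-write
```

Note that `rolearn` must be the base IAM role ARN, not the `arn:aws:sts::…:assumed-role/…` ARN of an individual session.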
I've created a cluster using @aws-cdk/aws-eks (I believe it would be the same for the quickstart). It creates the cluster with a dedicated role, since EKS has a weird rule:
> When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions).
>
> https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
Yesterday, a new console was deployed for EKS. It queries the cluster directly to get nodes and workloads: https://aws.amazon.com/blogs/containers/introducing-the-new-amazon-eks-console/
But I only see the following error with an IAM user:
```
Error loading Namespaces
Unauthorized: Verify you have access to the Kubernetes cluster
```
I've tried some ways, but I have some nitpicks about every one of them:
- I can switch to the role that created the cluster, or to one enlisted in `mapRoles` -> but I have to apply `eks:*` policies to it, and switching to a role in the console is a little bit cumbersome.
- `mapUsers` every IAM user that has `eks:DescribeCluster` to some group, and bind them to the `view` ClusterRole -> every …
- `mapAccounts` -> then I see `Error loading Namespaces namespaces is forbidden: User "arn:aws:iam::<accountId>:user/<userName>" cannot list resource "namespaces" in API group "" at the cluster scope`
- `mapAccounts` and bind every mapped account that has `eks:DescribeCluster` to the `view` ClusterRole -> every …
I wish for some other ways:
- map an IAM group to a k8s group (best)
- apply k8s groups via `mapAccounts` (would that be too permissive?)
- at least some other out-of-the-box way to take the errors away for them
+1 for this feature!