containers-roadmap
[EKS] [bugfix]: Federated roles containing paths don't work properly with EKS
Greetings,
I apologize if this is not the right venue for this. I tried to post it in the forums but was greeted with a 500 error :(
I am having authentication issues when issuing kubectl commands against an EKS cluster while assuming a role through Okta. The role ARN is arn:aws:iam::XXXXXXXXXXXX:role/teams/tooling/dev-tooling-operator
Upon examining the authenticator logs, I noticed that IAM(?) is reporting an ARN without the path in it back to the authenticator:
time="2019-11-12T22:05:47Z" level=warning msg="access denied" arn="arn:aws:iam::XXXXXXXXXXXX:role/dev-tooling-operator" client="127.0.0.1:49822" error="ARN is not mapped: arn:aws:iam::XXXXXXXXXXXX:role/dev-tooling-operator" method=POST path=/authenticate
Are you currently working around this issue? I recreated my IAM roles without paths in them.
Is this really the expected behaviour, or is it indeed a bug?
Regards, Luiz
You need to map the role without the path in the aws-auth ConfigMap. The reason seems to be that the authenticator only has the result of STS GetCallerIdentity (which doesn't include the role path) at hand when comparing against the mapping.
I can confirm that this works for non-federated roles as well.
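To make the mismatch concrete: the ARN you put in the aws-auth mapping has to be the path-stripped form that the authenticator sees after GetCallerIdentity. A minimal sketch of that transformation in Python (the strip_role_path helper and the 123456789012 account ID are just placeholders for illustration):

def strip_role_path(role_arn: str) -> str:
    """Drop any path segments from an IAM role ARN, e.g.
    arn:aws:iam::123456789012:role/teams/tooling/dev-tooling-operator
    -> arn:aws:iam::123456789012:role/dev-tooling-operator
    """
    prefix, _, resource = role_arn.partition(":role/")
    # The role name is the last segment; everything before it is the path.
    return f"{prefix}:role/{resource.rsplit('/', 1)[-1]}"

mapped_arn = strip_role_path(
    "arn:aws:iam::123456789012:role/teams/tooling/dev-tooling-operator"
)
# mapped_arn == "arn:aws:iam::123456789012:role/dev-tooling-operator",
# which is the form to put in the aws-auth mapRoles entry.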
This is relevant to https://github.com/aws/containers-roadmap/issues/474
Tracked in aws-iam-authenticator as https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/153
In case anyone wanders into this: after hours of thinking, screaming, and contemplating, I ended up doing this in CDK, which adds a duplicate role mapping for the node role without the path:
....
....
node_role = iam.Role(
    self,
    "EksBootstrapNodeRole",
    assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"),
    description=f"Node role for EKS cluster: {cluster_name}",
    # This path is what gets stripped from the assumed-role ARN.
    path="/ci-runners/",
    managed_policies=[
        iam.ManagedPolicy.from_aws_managed_policy_name(
            "AmazonEKSWorkerNodePolicy"
        ),
        iam.ManagedPolicy.from_aws_managed_policy_name(
            "AmazonEC2ContainerRegistryReadOnly"
        ),
        iam.ManagedPolicy.from_aws_managed_policy_name(
            "AmazonSSMManagedInstanceCore"
        ),
    ],
)
# aws-iam-authenticator removes the path from the assumed-role ARN, causing the
# Kubernetes authenticator to fail server-side, so rebuild the ARN without the path.
role_arn_for_aws_auth = Fn.join(
    "",
    [
        "arn:",
        Aws.PARTITION,
        ":iam::",
        Aws.ACCOUNT_ID,
        ":role/",
        node_role.role_name,
    ],
)
# Import the same role by its path-less ARN so that ARN is what lands in aws-auth.
role_for_aws_auth = iam.Role.from_role_arn(
    self, "EksBootstrapNodeRoleForAwsAuth", role_arn_for_aws_auth
)
cluster.aws_auth.add_role_mapping(
    role_for_aws_auth,
    username="system:node:{{EC2PrivateDNSName}}",
    groups=[
        "system:bootstrappers",
        "system:nodes",
    ],
)
....
....
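For what it's worth, the trick is that Role.from_role_arn only parses the ARN string it is given, so handing it the rebuilt, path-less ARN makes CDK write exactly that ARN into the aws-auth mapRoles entry, which matches the form the authenticator compares against after GetCallerIdentity.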
Addressed with #185