containers-roadmap
[EKS][Feature Request]: Automatically remove IAM Roles from “Access Entries”
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Tell us about your request We would like EKS to automatically remove from “Access Entries” the IAM roles of Managed Node Groups that no longer exist.
Which service(s) is this request for? EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? In the past we created a Managed Node Group in our EKS cluster. The worker nodes (EC2 instances) of that Managed Node Group joined the EKS cluster successfully using the API authentication mode. At some point we decided to delete this Managed Node Group from our EKS cluster. The deletion completed successfully; however, the IAM Role used by that Managed Node Group was never removed from “Access Entries”. Later, when we re-provisioned the same Managed Node Group, it could not be created successfully because the worker nodes could not join the EKS cluster. After manually deleting the old IAM Role from “Access Entries”, the worker nodes registered successfully. The IAM Roles are added to “Access Entries” automatically by AWS, so we expected the removal to happen automatically as well.
Are you currently working around this issue? How are you currently solving this problem? We manually delete the IAM Role from “Access Entries”.
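For reference, a minimal boto3 sketch of that manual workaround (the cluster name and role ARN below are placeholders for our real values):

```python
import boto3

# Placeholders for our actual values.
CLUSTER = "my-cluster"
STALE_NODE_ROLE_ARN = "arn:aws:iam::111122223333:role/old-nodegroup-role"

eks = boto3.client("eks")

# Drop the access entry that still points at the deleted node group role.
eks.delete_access_entry(clusterName=CLUSTER, principalArn=STALE_NODE_ROLE_ARN)
```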
Additional context Anything else we should know? We opened a ticket with AWS Support (ID 172780501600814).
Did you delete the original IAM role and re-create it with the same name? EKS access entries validate against the principal ID of the role ARN (vs the config map, which did string matching). That could be what happened here. If so, then yes, you would need to manually delete the access entry.
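For anyone hitting this, a small boto3 sketch (the role name is a placeholder) that prints the unique RoleId behind the ARN; a role deleted and re-created with the same name gets a new RoleId even though the ARN string is identical, which is why the old access entry stops matching:

```python
import boto3

iam = boto3.client("iam")

# Placeholder role name. Access entries bind to the role's unique RoleId under the
# covers, not the ARN string, so a re-created role with the same name has a new
# RoleId and the old access entry silently stops matching.
role = iam.get_role(RoleName="my-nodegroup-role")["Role"]
print(role["Arn"], role["RoleId"])
```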
Hi @mikestef9. Yes, we deleted the original IAM Role and re-created it with the same name. We didn't have this behavior when we used CONFIG_MAP as the authentication mode, and because of that we expected that when using API as the authentication mode this deletion would happen automatically.
See the docs callout here starting with "If you ever delete the IAM principal with this ARN..."
I ran into the "delete IAM role and re-create it, lose access to cluster" issue as well.
Short of automatically deleting the access entry, would it be reasonable for the AWS console to indicate that the IAM role referenced in the Access Entry is invalid? There isn't even an indication from the API that the entry is tied to the unique ID under the covers, so it was actually rather difficult for us to discover the root cause.
Just ran into this also. The thing is, we do not create the access entry for nodes when the cluster is in API mode -- EKS creates the access entry on its own. So when deleting the node group, we absolutely expect EKS to also delete the access entry for that role, if there are no remaining node groups using that role. EKS shouldn't leave the stale access entry lying around.
I also just ran into this problem with fargate access entries. After completely tearing down a fargate profile and its IAM role and then trying to spin it back up, fargate was not able to schedule pods assigned to the new profile. Since fargate access entries are not user managed, it does not seem like it would make sense to make the user responsible for deletion when a profile is deleted. At the very least I would expect that EKS is able to create a new Access Entry for the recreated fargate profile that works with its new role. That doesn't seem to be the case, and there was no indication that there was an issue with the profile's access entry on the console.
My primary issue with this is that if you remove and recreate Managed Node Groups, the Node Instance Role of the old node group is deleted, but the access entry referencing that now non-existent role ARN still remains.
So over time (creating new managed node groups on every EKS upgrade), these orphaned Access Entries pile up. I confirmed with the eksctl developers that they do not create the access entry; it is created by AWS when creating the managed node group: https://github.com/eksctl-io/eksctl/issues/8516
So from a clean ownership principle point-of-view: if EKS creates it, it owns it, and it should delete it when no longer required. If not, it should not create it in the first place, but leave it to the user/api client (for me, eksctl).
The "something else might use that role" argument is invalid, as the IAM Role itself is deleted by eksctl, so the Access Entry points into the void.
I encountered this issue today too when removing and recreating EKS Fargate profiles. There is very little indication that this is due to the access entry; I only found it from the Kube API Server logs showing an authentication error. The access entry should be removed when the IAM role and the Fargate profile are removed.
time="2025-10-15T22:06:24Z" level=warning msg="access denied" arn="arn:aws:iam:::role/kube-system-eks-fargate-role" client="127.0.0.1:59224" error="identity is not mapped" method=POST path=/authenticate