aws-iam-authenticator

Same ARN twice overwrites previous groups

Open · mmack opened this issue on Oct 15, 2019 · 10 comments

Hey guys,

I made a rather stupid mistake, but the outcome is kinda catastrophic :) If you use the same role ARN twice in mapRoles, the groups are not merged; only the last entry's groups are applied.

See this example:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::X:role/for-nodes"
      username: "system:node:{{EC2PrivateDNSName}}"
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: "arn:aws:iam::X:role/admins"
      username: "operator"
      groups:
        - "system:masters"
    - rolearn: "arn:aws:iam::X:role/Some-user"
      groups:
        - test-max-dev-group
    - rolearn: "arn:aws:iam::X:role/admins"
      groups:
        - test-andi-dev-group

So by listing "arn:aws:iam::X:role/admins" twice I removed access to the "system:masters" group and I'm out of business here. The group "test-andi-dev-group" only has access to its own namespace, so I'm not able to change anything in "kube-system" anymore.
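For reference, here is roughly what I meant to write, with the duplicate collapsed into a single entry (just a sketch, assuming I want to keep the username from the first entry and merge both group lists):

mapRoles: |
  - rolearn: "arn:aws:iam::X:role/admins"
    username: "operator"
    groups:
      - "system:masters"
      - test-andi-dev-group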

Any ideas on getting back access to my cluster other than editing etcd directly?

Max

mmack avatar Oct 15 '19 06:10 mmack

Merging groups makes sense in this case, but what should the behavior be when the usernames conflict? Maybe the right thing to do here is to use the first mapping instance alone; that way the new mapping doesn't work, but existing permissions are unaffected.
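To make the conflict concrete, take the duplicate admins entries from the example above: the first sets username "operator" and group "system:masters", the second sets no username and group "test-andi-dev-group". Merging the groups is straightforward, but there is no obvious right answer for the username. With first-mapping-wins, the effective mapping would be:

- rolearn: "arn:aws:iam::X:role/admins"
  username: "operator"
  groups:
    - "system:masters"

and the second entry would simply be ignored, so access via system:masters would have been preserved.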

nckturner avatar Oct 23 '19 00:10 nckturner

Access rights are mapped based on the ARN and groups, so the username is just a "display thing", right? So I would just ignore it in the merge process... Or maybe throw a warning...

mmack avatar Oct 23 '19 06:10 mmack

No, the username is the username in Kubernetes when it is provided, so it affects which RBAC is applied.
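For example, an RBAC binding can name the mapped user directly (the binding and namespace names below are made up), so which username wins determines which bindings apply:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operator-edit        # hypothetical binding name
  namespace: test-max-dev    # hypothetical namespace
subjects:
  - kind: User
    name: operator           # the username set in the aws-auth mapping
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in ClusterRole, granted only in this namespace
  apiGroup: rbac.authorization.k8s.io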

nckturner avatar Oct 23 '19 18:10 nckturner

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jan 21 '20 18:01 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot avatar Feb 20 '20 19:02 fejta-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

fejta-bot avatar Mar 21 '20 19:03 fejta-bot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 21 '20 19:03 k8s-ci-robot

/reopen /remove-lifecycle rotten

nckturner avatar Jun 25 '20 16:06 nckturner

@nckturner: Reopened this issue.

In response to this:

/reopen /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jun 25 '20 16:06 k8s-ci-robot

/lifecycle frozen

nckturner avatar Jun 25 '20 16:06 nckturner