cluster-api-provider-aws
                        e2e tests fail with misleading error message when lacking IAM Create User permissions
/kind bug
What steps did you take and what happened: When running the end to end tests in an AWS account that lacks permissions for IAM Create User or other IAM tasks and they fail in the CloudFormation stack, the error to presented to the user misleads them to believe a timeout was reached.
For example:
STEP: Loading the e2e test configuration from "/home/cmcavoy/indeed/cluster-api-provider-aws/test/e2e/data/e2e_eks_conf.yaml"                                                                                                                                                                                            
STEP: Getting an AWS IAM session - from environment                                                                                                                                                                                                                                                                      
STEP: Creating a bootstrap AWSIAMConfiguration                                                                                                                                                                                                                                                                           
STEP: Creating AWS CloudFormation stack for AWS IAM resources: stack-name=cluster-api-provider-aws-sigs-k8s-io                                                                                                                                                                                                           
I1024 14:31:20.273890  264864 service.go:69] AWS Cloudformation stack "cluster-api-provider-aws-sigs-k8s-io" already exists, updating                                                                                                                                                                                    
STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack                                                                                                                                                                                                                                                 
STEP: Creating AWS CloudFormation stack for AWS IAM resources: stack-name=cluster-api-provider-aws-sigs-k8s-io                                                                                                                                                                                                           
Failure [129.402 seconds]                                                                                                                                                                                                                                                                                                
[BeforeSuite] BeforeSuite                                                                                                                                                                                                                                                                                                
/home/cmcavoy/indeed/cluster-api-provider-aws/test/e2e/suites/managed/managed_suite_test.go:57                                                                                                                                                                                                                           
                                                                                                                                                                                                                                                                                                                         
  Unexpected error:                                                                                                                                                                                                                                                                                                      
      <*errors.withStack | 0xc0010dfe18>: {                                                                                                                                                                                                                                                                              
          error: <*errors.withMessage | 0xc000817d80>{                                                                                                                                                                                                                                                                   
              cause: <*awserr.baseError | 0xc0005ae300>{                                                                                                                                                                                                                                                                 
                  code: "ResourceNotReady",                                                                                                                                                                                                                                                                              
                  message: "failed waiting for successful resource state",                                                                                                                                                                                                                                               
                  errs: nil,                                                                                                                                                                                                                                                                                             
              },                                                                                                                                                                                                                                                                                                         
              msg: "failed to create AWS CloudFormation stack",                                                                                                                                                                                                                                                          
          },                                                                                                                                                                                                                                                                                                             
          stack: [0x1b19ff5, 0x1b197db, 0x2139417, 0x2144be5, 0x21530e5, 0x4db225, 0x4da71c, 0x18654ba, 0x18635b1, 0x1862fa5, 0x18646be, 0x18642f0, 0x18739d8, 0x187348f, 0x18758e5, 0x187e2a9, 0x187e0be, 0x215309b, 0x525d4b, 0x470561],                                                                               
      }                                                                                                                                                                                                                                                                                                                  
      failed to create AWS CloudFormation stack: ResourceNotReady: failed waiting for successful resource state                                                                                                                                                                                                          
  occurred                                                                                                                                                                                                                                                                                                               
                                                                                                                                                                                                                                                                                                                         
  /home/cmcavoy/indeed/cluster-api-provider-aws/test/e2e/shared/suite.go:135 
However, the real issue was only visible in the AWS CloudFormation stack:
API: iam:CreateUser User: arn:aws:sts::593267375736:assumed-role/ct-devops-admin/cmcavoy is not authorized to perform: iam:CreateUser on resource: arn:aws:iam::593267375736:user/bootstrapper.cluster-api-provider-aws.sigs.k8s.io with an explicit deny in a service control policy
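One way to surface the underlying failure would be to query the stack's events (e.g. via the CloudFormation DescribeStackEvents API) after a wait fails, and report the status reasons of any resources that ended in a failed state. The sketch below illustrates only the filtering step; `StackEvent` here is a simplified, hypothetical stand-in for the AWS SDK's event type, not the provider's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// StackEvent is a simplified stand-in for the AWS SDK's CloudFormation
// stack event type (hypothetical, for illustration only).
type StackEvent struct {
	LogicalResourceId    string
	ResourceStatus       string
	ResourceStatusReason string
}

// failureReasons collects the status reasons of events whose resource
// ended in a *_FAILED state, so they can be reported alongside the
// generic "ResourceNotReady" error instead of being hidden in the
// CloudFormation console.
func failureReasons(events []StackEvent) []string {
	var reasons []string
	for _, e := range events {
		if strings.HasSuffix(e.ResourceStatus, "_FAILED") && e.ResourceStatusReason != "" {
			reasons = append(reasons, fmt.Sprintf("%s: %s", e.LogicalResourceId, e.ResourceStatusReason))
		}
	}
	return reasons
}

func main() {
	// Events modeled on the failure described in this issue.
	events := []StackEvent{
		{LogicalResourceId: "bootstrapper", ResourceStatus: "CREATE_FAILED",
			ResourceStatusReason: "User is not authorized to perform: iam:CreateUser (explicit deny in a service control policy)"},
		{LogicalResourceId: "ControlPlaneRole", ResourceStatus: "CREATE_COMPLETE"},
	}
	for _, r := range failureReasons(events) {
		fmt.Println(r)
	}
}
```

With real credentials the event slice would come from DescribeStackEvents on the failed stack; appending these reasons to the returned error would have pointed directly at the denied iam:CreateUser call.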

What did you expect to happen: I expected to receive an error message about insufficient privileges or a misconfigured AWS account.
Anything else you would like to add: Slack thread: https://kubernetes.slack.com/archives/CD6U2V71N/p1666639754548709
Environment:
- Cluster-api-provider-aws version: main branch
- Kubernetes version (kubectl version): 1.23
- OS (e.g. from /etc/os-release): Ubuntu 22.04
/triage accepted
/priority important-soon
/assign
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
Will be fixed in #3715
/triage accepted
/remove-triage accepted
/triage accepted
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten