
Assuming multiple roles for a Terraform deployment

Open lpossamai opened this issue 4 years ago • 5 comments

Hello,

I have a question about how I can use configure-aws-credentials to assume multiple roles, so that my Terraform provider.tf file can apply all the necessary changes to multiple accounts.

Example: In my PROD workspace, I need to deploy to TEST and DEV workspaces. In my provider.tf file I have the following:

provider "aws" {
  region = "ap-southeast-2"
  assume_role {
    role_arn = local.role_arns[terraform.workspace]
  }
}

provider "aws" {
  alias  = "test"
  region = "ap-southeast-2"
  assume_role {
    role_arn = local.role_arns.test
  }
}

provider "aws" {
  alias  = "staging"
  region = "ap-southeast-2"
  assume_role {
    role_arn = local.role_arns.staging
  }
}

In my GitHub Actions workflow I have the following:

steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: ${{ env.iam_role_to_assume_prod }}
          aws-region: ${{ env.AWS_REGION }}

But that gives me an error, because GitHub doesn't have permission to assume the other two roles, staging and test.

Error: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors

Is there a workaround for this? Any suggestions are welcome.
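I suspect the staging and test roles' trust policies would need to allow the prod role to assume them. A minimal sketch of what I think that would look like in Terraform (role names and account IDs below are placeholders, not my real setup):

```hcl
# Hypothetical sketch: allow the prod CI role (the one assumed by
# configure-aws-credentials) to chain-assume the staging role.
data "aws_iam_policy_document" "staging_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:role/prod-ci-role"] # placeholder
    }
  }
}

resource "aws_iam_role" "staging_ci" {
  name               = "staging-ci-role" # placeholder
  assume_role_policy = data.aws_iam_policy_document.staging_trust.json
}
```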

Thanks!

lpossamai avatar Apr 25 '22 08:04 lpossamai

Facing the same issue and would love to find a solution.

alicancakil avatar May 18 '22 04:05 alicancakil

I can provide a suitable solution for multiple regions & multiple accounts. Here is the repo link: http://github.com/startuplcoud/infra-multi-account-region-startup-kit/ (I still need to update the documentation with many more details).

nooperpudd avatar Jun 18 '22 09:06 nooperpudd

This is how I manage it in my pipeline:

      - name: Terraform Validate
        working-directory: ./ProvisionAWSGlobal
        id: validate
        run: terraform validate -no-color
        env:
          AWS_ACCESS_KEY_ID: "${{ secrets.APPID }}"
          AWS_SECRET_ACCESS_KEY: "${{ secrets.APPSECRET }}"
        continue-on-error: true 

And in my terraform:

provider "aws" {
  alias  = "base"
  region = var.deploy_region
  default_tags {
    tags = {
      managed_by = "Terraform"
    }
  }
}

provider "aws" {
  alias  = "other"
  region = var.deploy_region
  assume_role {
    role_arn = "arn:aws:iam::${var.management_account_id}:role/${var.rolename}"
  }
  default_tags {
    tags = {
      managed_by = "Terraform"
    }
  }
}

HanwhaARudolph avatar Jul 01 '22 22:07 HanwhaARudolph

We do Assume Role twice to manage multiple provider situations like this case.

[GithubAction] -----------------------> [prod_role] -----------------------> [staging_role] 
                 assume role with                     assume role with
             configure-aws-credentials              terraform assume_role

[GithubAction] -----------------------> [prod_role] -----------------------> [test_role] 
                 assume role with                     assume role with
             configure-aws-credentials              terraform assume_role
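As a sketch of the Terraform side of that chain, assuming the prod_role credentials from configure-aws-credentials are already in the environment (ARNs below are placeholders):

```hcl
# Default provider: picks up the prod_role credentials that
# configure-aws-credentials exports via environment variables.
provider "aws" {
  region = "ap-southeast-2"
}

# Chained providers: Terraform performs the second AssumeRole itself,
# so staging_role and test_role must trust prod_role.
provider "aws" {
  alias  = "staging"
  region = "ap-southeast-2"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/staging_role" # placeholder ARN
  }
}

provider "aws" {
  alias  = "test"
  region = "ap-southeast-2"
  assume_role {
    role_arn = "arn:aws:iam::333333333333:role/test_role" # placeholder ARN
  }
}
```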

kono2021 avatar Aug 25 '22 01:08 kono2021

If you could provide example code for the double Assume Role approach, that would be awesome!

lpossamai avatar Aug 25 '22 01:08 lpossamai

Hello, I recently encountered this same issue. Is there any update on a fix?

CyberViking949 avatar Feb 15 '23 18:02 CyberViking949

@CyberViking949 This advice worked for me to assume multiple roles https://github.com/aws-actions/configure-aws-credentials/issues/636#issuecomment-1418641641

Constantin07 avatar Feb 16 '23 00:02 Constantin07

@CyberViking949 This advice worked for me to assume multiple roles #636 (comment)

Thanks @Constantin07, however that approach requires static access keys. The whole reason I was leveraging this action was to use the GitHub OIDC provider in AWS, so I'm assuming a role in an identity account to assume a role in a prod/dev account, all using ephemeral tokens.

Action assume role --> Identity role (this action) --> backend role for s3 statefiles --> Child role for plan/apply.

The backend role is assumed properly and state is pulled. However, plan/apply is not using the role defined in the provider and is instead using the role from the identity account.
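For anyone debugging a similar chain, the intended layout would be roughly the following (all names and ARNs are placeholders, assuming the identity-role OIDC credentials are already in the environment):

```hcl
# Backend: state access goes through the dedicated backend role.
terraform {
  backend "s3" {
    bucket   = "example-tf-state"                                   # placeholder
    key      = "infra/terraform.tfstate"                            # placeholder
    region   = "ap-southeast-2"
    role_arn = "arn:aws:iam::111111111111:role/backend-state-role"  # placeholder
  }
}

# Plan/apply provider: should chain from the identity role into the
# child role. If this block is missing or misconfigured, API calls
# fall back to the identity role's credentials instead.
provider "aws" {
  region = "ap-southeast-2"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/child-plan-apply-role" # placeholder
  }
}
```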

CyberViking949 avatar Feb 16 '23 16:02 CyberViking949

I'm standing on the shoulders of giants with this, but here is something that I whipped up to meet my use case: https://github.com/marketplace/actions/configure-aws-profile

mcblair avatar Mar 19 '23 20:03 mcblair

Thanks for sharing this @mcblair, this is excellent. I'm going to be closing this issue in favor of https://github.com/aws-actions/configure-aws-credentials/issues/112, as I suspect that once #112 is implemented it will work for this use case. Let me know if you disagree and I can reopen this issue.

peterwoodworth avatar Jul 03 '23 22:07 peterwoodworth

Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.

github-actions[bot] avatar Jul 03 '23 22:07 github-actions[bot]

Hi @peterwoodworth ,

I disagree that #112 will fix this issue. #112 uses profiles, not IAM roles. Depending on your setup, that would result in a very long pipeline config file and lots and lots of GitHub Secrets to configure, which isn't practical.

If we take the #112 example:

- name: Add Dev profile credentials to ~/.aws/credentials
   env:
      AWS_ACCESS_KEY_ID: ${{ secrets.DEV_AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.DEV_AWS_SECRET_ACCESS_KEY }}
   run: |
      aws configure set aws_access_key_id $DEV_AWS_ACCESS_KEY_ID --profile my-app-name-dev
      aws configure set aws_secret_access_key $DEV_AWS_SECRET_ACCESS_KEY --profile my-app-name-dev

- name: Add Staging profile credentials to ~/.aws/credentials
   env:
      AWS_ACCESS_KEY_ID: ${{ secrets.STAGING_AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.STAGING_AWS_SECRET_ACCESS_KEY }}
   run: |
      aws configure set aws_access_key_id $STAGING_AWS_ACCESS_KEY_ID --profile my-app-name-staging
      aws configure set aws_secret_access_key $STAGING_AWS_SECRET_ACCESS_KEY --profile my-app-name-staging

- name: Add Prod profile credentials to ~/.aws/credentials
   env:
      AWS_ACCESS_KEY_ID: ${{ secrets.PROD_AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.PROD_AWS_SECRET_ACCESS_KEY }}
   run: |
      aws configure set aws_access_key_id $PROD_AWS_ACCESS_KEY_ID --profile my-app-name-prod
      aws configure set aws_secret_access_key $PROD_AWS_SECRET_ACCESS_KEY --profile my-app-name-prod

What I propose is a way to support authenticating to multiple AWS accounts using IAM roles.

lpossamai avatar Jul 03 '23 22:07 lpossamai

Thanks @lpossamai, I see why profiles don't solve this for you.

I'm curious to know more about how exactly you're using this action within your workflow, and what exactly you're doing in Terraform. I'm unfamiliar with Terraform; is there one command that you're running in one step, and do you need to be able to assume multiple roles at once for this one Terraform command to work?

peterwoodworth avatar Jul 03 '23 23:07 peterwoodworth

Hi @peterwoodworth , thanks for your prompt reply.

TBH, I have changed the way I use Terraform and authenticate with AWS, so this issue no longer affects me and I cannot replicate it anymore. Looking at this further, I realize now that the limitation I was facing is not something that needs to be, or can be, fixed by the maintainers of aws-actions/configure-aws-credentials; it should be addressed at the Terraform level.


A little background for further reference.

Before the change I made, I was using Github Actions to deploy my infrastructure to AWS with Terraform. A sample code would be:

// terraform/elb/main.tf
resource "aws_lb" "alb" {
  count                      = terraform.workspace == "test" || terraform.workspace == "staging" ? 1 : 0
  name                       = "example-${terraform.workspace}-alb"
  internal                   = false
  load_balancer_type         = "application"
  security_groups            = [aws_security_group.alb[count.index].id]
  subnets                    = data.terraform_remote_state.network.outputs.public_subnets
  idle_timeout               = 300
  enable_deletion_protection = true
  enable_http2               = true
  preserve_host_header       = true
  drop_invalid_header_fields = true

  access_logs {
    bucket  = module.alb_log_bucket[count.index].s3_bucket_id
    prefix  = terraform.workspace
    enabled = true
  }

  tags = merge({
    Environment = terraform.workspace
  }, var.tags)
}

The github workflow for that particular folder would look like this:

jobs:
  ELB-TEST:
    name: "ELB-TEST"
    runs-on: ubuntu-latest
    environment: test
    env:
      TF_VAR_iam_role_to_assume_test: ${{ secrets.iam_role_to_assume_test }}
      ENVIRONMENT: test
    defaults:
      run:
        working-directory: ${{ env.WORKING_DIRECTORY }}

    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ env.TF_VAR_iam_role_to_assume_test }}
          role-session-name: github-ELB-test
          aws-region: ${{ env.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Terraform Format
        id: fmt
        run: terraform fmt -check -recursive

      - name: Terraform Init
        id: init
        run: |
          terraform init -backend-config="role_arn=$TF_VAR_iam_role_to_terraform_backend"

      - name: Terraform Validate
        id: validate
        run: |
          terraform validate -no-color
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true

      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        run: terraform plan -input=false -out=tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        continue-on-error: false
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false -auto-approve tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        env:
          TF_WORKSPACE: test
          TF_IN_AUTOMATION: true

  ELB-STAGING:
    name: "ELB-STAGING"
    runs-on: ubuntu-latest
    needs: ELB-TEST
    environment: staging
    env:
      TF_VAR_iam_role_to_assume_staging: ${{ secrets.iam_role_to_assume_staging }}
      ENVIRONMENT: staging
    defaults:
      run:
        working-directory: ${{ env.WORKING_DIRECTORY }}

    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ env.TF_VAR_iam_role_to_assume_staging }}
          role-session-name: github-ELB-staging
          aws-region: ${{ env.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Terraform Format
        id: fmt
        run: terraform fmt -check -recursive

      - name: Terraform Init
        id: init
        run: |
          terraform init -backend-config="role_arn=$TF_VAR_iam_role_to_terraform_backend"

      - name: Terraform Validate
        id: validate
        run: |
          terraform validate -no-color
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true

      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        run: terraform plan -input=false -out=tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        continue-on-error: false
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -input=false -auto-approve tf_plan_out_${{ env.ENVIRONMENT }}_${{ env.TF_MODULE_NAME }}.tfplan
        env:
          TF_WORKSPACE: staging
          TF_IN_AUTOMATION: true

So, not great: I would have to have a job for each of my environments and for each of my terraform/** folders/modules. And not only that, but what if I want to deploy to multiple accounts in the same PR? That wouldn't be possible.

What I ended up doing was:

  1. Moved my TF backend to a Shared-Services AWS Account
  2. Implemented Terragrunt in my repository to help keep the code DRY
  3. Implemented Terrateam as my new CI solution

This now allows me to deploy to multiple accounts in the same PR using provider = aws.alias. You can check this diagram to understand the concept.
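As a quick illustration of the provider = aws.alias approach (the alias, bucket name, and ARN below are made up):

```hcl
provider "aws" {
  alias  = "test"
  region = "ap-southeast-2"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/test_role" # placeholder ARN
  }
}

# A resource (or module) then targets a specific account by
# selecting the aliased provider explicitly.
resource "aws_s3_bucket" "example" {
  provider = aws.test
  bucket   = "example-test-bucket" # placeholder
}
```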

Safe to close this issue now. Thanks!

lpossamai avatar Jul 04 '23 03:07 lpossamai


Sorry, I didn't read all the comments before replying. #112 covers my requirements.

benabineri avatar Jul 04 '23 08:07 benabineri