
AzureRM backend support for users to plan without permission to apply

Open aidapsibr opened this issue 4 years ago • 16 comments

Current Terraform Version

0.15.0

Use-cases

Enable users to access a particular storage container with only Storage Blob Data Reader for performing no-lock plans to validate changes locally.
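The role assignment described in this use-case could be sketched in Terraform as follows; the storage account reference and principal variable are placeholders for resources assumed to exist elsewhere:

```hcl
# Sketch (placeholder names): grant a plan-only identity read access to a
# single state container rather than the whole storage account.
resource "azurerm_role_assignment" "plan_only_reader" {
  # Container-level scope: the storage account resource ID plus the
  # blobServices/default/containers/<name> suffix.
  scope                = "${azurerm_storage_account.state.id}/blobServices/default/containers/tfstate"
  role_definition_name = "Storage Blob Data Reader"
  principal_id         = var.plan_only_principal_id # hypothetical variable
}
```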

Attempted Solutions

Attempted to assign the role and leverage the use_azuread_auth option on the backend to access a particular storage container.

Proposal

Currently, authorization fails even with use_azuread_auth and the appropriate data roles, because the backend tries to call listKeys on the storage account. Updating the backend flow to skip this step and instead use Azure AD tokens to access the blobs would enable this directly. I believe this should be a non-breaking change, since an identity that can call listKeys on a storage account already has the blob data access that the keys grant.
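Under this proposal, a backend block along these lines (names are placeholders) would work for an identity holding only blob data roles, with no listKeys permission:

```hcl
# Sketch of the configuration the proposal would enable: Azure AD token
# auth against the blob endpoint, with no account key involved.
terraform {
  backend "azurerm" {
    storage_account_name = "examplestate" # placeholder
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    use_azuread_auth     = true
  }
}
```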

References

#20831 - The title of that issue was misleading: you no longer need to provide a key directly, but the backend still uses the keys to access blobs.

aidapsibr avatar Apr 16 '21 21:04 aidapsibr

This work would also enable Storage Blob Data Contributor to be sufficient for running apply and locked plans as an intended side-effect.

aidapsibr avatar Apr 16 '21 21:04 aidapsibr

Curious why this has been backlogged so long. The code just has to be adjusted to use RBAC for the operations rather than an account key. As it currently stands, each state file must have its own storage account for security, since the backend essentially uses the "root password" for all operations, which makes RBAC mechanisms like OIDC meaningless and the documentation misleading.

Users who do not know this esoteric detail will assume they are safe putting their state files in separate containers within a storage account. In reality, anyone who compromises the Terraform access (or simply makes a mistake) can delete all the files in all containers, so the annoying overhead of one storage account per state file is necessary for proper security and isolation.

If this is not going to be remediated anytime soon, then at least warn people in the documentation that the backend still uses legacy account-key access despite the "RBAC" configuration, and that best practice is one storage account per state file or environment.

JustinGrote avatar Jun 01 '22 21:06 JustinGrote

I'm just the one who opened the issue. Our solution was to do the same as you described, with separate storage accounts, which is really unfortunate. I wish this would be prioritized.

aidapsibr avatar Jun 05 '22 22:06 aidapsibr

@aidapsibr my apologies, I saw you were able to edit the tags and assumed you were a maintainer :)

JustinGrote avatar Jun 06 '22 15:06 JustinGrote

Isn't this issue fixed by using use_azuread_auth = true in the Terraform azurerm backend configuration block?

yann-soubeyrand avatar Jun 24 '22 14:06 yann-soubeyrand

I don't think so, since the user must be Data Owner to have the lock permission.

BzSpi avatar Jun 24 '22 14:06 BzSpi

This is still a major security issue. If you disable access keys on a storage account, remote state no longer functions. The ability to remove the use of root keys is critical to moving everything to managed identities.
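For reference, disabling access keys on the state storage account is a single Terraform argument; a minimal sketch with placeholder names:

```hcl
# Sketch: disabling shared key (access key) authorization on the state
# storage account forces all data-plane access through Azure AD.
resource "azurerm_storage_account" "state" {
  name                      = "examplestatesa" # placeholder
  resource_group_name       = "rg-tfstate"
  location                  = "westeurope"
  account_tier              = "Standard"
  account_replication_type  = "LRS"
  shared_access_key_enabled = false
}
```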

KoenR3 avatar Nov 13 '22 22:11 KoenR3

The latest versions seem to do this correctly. I've been able to avoid providing an access key and have everything in tfstate work with RBAC, so my tfstate for different environments can now live in two containers in the same storage account and work fine.

JustinGrote avatar Nov 14 '22 16:11 JustinGrote

Yes, you do not have to provide an access key, but when access keys are turned off on the storage account, the remote state fails...

KoenR3 avatar Nov 14 '22 20:11 KoenR3

What is the status of this? We are facing the same issue.

MarkKharitonov avatar Mar 21 '23 02:03 MarkKharitonov

This is also the case with managed identities: they too need permission to list the storage keys, even if the managed identity has the correct permissions on the blob storage container itself.

nrjohnstone avatar Jun 30 '23 13:06 nrjohnstone

I would be very happy if TF worked with Storage Blob Data Contributor permissions at the container level. I assumed that's what use_azuread_auth = true would do, but in reality it just uses RBAC permissions to fetch the access key at the account level, and then TF works exactly as it does without use_azuread_auth. Access keys make RBAC useless.

kimjamia avatar Dec 04 '23 09:12 kimjamia

Any news about this? This is crucial for being able to manage access with RBAC.

g13013 avatar Jan 02 '24 11:01 g13013

This is a security issue! Why is this not prioritized?

g13013 avatar Mar 07 '24 08:03 g13013

Does anyone know if OpenTofu has fixed this problem? If not, someone should mirror this issue there. I'm no longer using Azure, so I'm not the best representative for it; I imagine this will get more traction there. I've given up on HashiCorp, even though I was a paying customer of Terraform Cloud for the years this was open...

aidapsibr avatar Mar 07 '24 08:03 aidapsibr

I've been able to successfully use the azurerm backend with use_azuread_auth to access (read + write) a state file in a container, without permission to call listKeys on the storage account. I've been doing this for a while now; it works with Terraform as old as 1.4.6.

I just used a custom role and use_azuread_auth = true in backend config. This custom role is assigned to the identity at the storage container scope, so the identity has access only to the container it was assigned to. Here are the permissions the custom role should have:

  permissions {
    actions = [
      "Microsoft.Storage/storageAccounts/blobServices/containers/read"
    ]
    not_actions = []
    data_actions = [
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
    ]
  }

Maybe this could be adapted to enable planning without permission to apply by removing these permissions:

      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
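For context, the permissions block above belongs inside an azurerm_role_definition resource; a minimal sketch, assuming a subscription data source and placeholder names:

```hcl
# Sketch: a custom role carrying the permissions listed above, which can
# then be assigned to an identity at the storage container scope.
data "azurerm_subscription" "current" {}

resource "azurerm_role_definition" "tfstate_access" {
  name  = "Terraform State Blob Access" # placeholder name
  scope = data.azurerm_subscription.current.id

  permissions {
    actions = ["Microsoft.Storage/storageAccounts/blobServices/containers/read"]
    data_actions = [
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action",
    ]
  }

  assignable_scopes = [data.azurerm_subscription.current.id]
}
```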

adelinn avatar Sep 19 '24 10:09 adelinn

I've just completed testing of this issue again.

It is perfectly possible to:

  1. Create a Storage Blob Data Reader assignment at the container level you are targeting in the backend (and only this permission).
  2. Run terraform plan -lock=false; this is sufficient (a pre-existing tfstate file is required).
  3. Do all of this with access keys disabled on the storage account in question.

In my case the azurerm storage account backend is shared and located in a dedicated subscription.

The earliest version I tested this on was Terraform 1.2.0, because I am using OIDC auth; I could not verify earlier versions, but it might work there too.

  backend "azurerm" {
    storage_account_name = "xxx"
    container_name       = "t-tstsp2"
    key                  = "terraform.tfstate"
    use_azuread_auth     = true

    # the two settings below are needed on older TF versions but not newer
    subscription_id     = "yyyy"
    resource_group_name = "zzz"
  }

Relevant env vars were set in the GitHub Actions workflow (shown in a screenshot in the original comment).
Initializing the backend...

Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using hashicorp/azurerm v4.25.0 from the shared cache directory

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Run terraform plan -input=false -lock=false -out=plan.tfplan -detailed-exitcode

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_resource_group.name will be created
  + resource "azurerm_resource_group" "name" {
      + id       = (known after apply)
      + location = "norwayeast"
      + name     = "rgasdasd"
    }

I think this issue should be closed, as from what I can see it was solved a long time ago.

audunsolemdal avatar Mar 31 '25 07:03 audunsolemdal

Thanks for taking the time to submit this issue. It looks like this has been resolved in more recent versions of Terraform (1.2.0 and 1.4.6, as mentioned above). As such, I am going to mark this issue as closed. If that is not the case, please provide additional information, including the recent version in which you are not seeing this enhancement. Thanks!

rcskosir avatar Mar 31 '25 18:03 rcskosir

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions[bot] avatar May 01 '25 02:05 github-actions[bot]