
[ISSUE] cannot update `databricks_storage_credential` even if `force_update` is true and other problems

Open jesinity opened this issue 1 year ago • 4 comments

I'm trying, without success, to create a databricks_external_location that is supposed to use a databricks_storage_credential. I found at least three different problems with databricks_storage_credential.

I'm deploying it on Azure, using the latest provider, version 1.33.

Here is the relevant part:

resource "databricks_storage_credential" "dbrk_store_cred_adls" {
  provider     = databricks.account
  name         = "dbrk_store_cred_adls_dev"
  metastore_id = local.databricks_metastore_id_we
  force_update = true
  azure_service_principal {
    directory_id   = data.azurerm_client_config.current.tenant_id
    application_id = data.azurerm_client_config.current.client_id
    client_secret  = data.azurerm_key_vault_secret.my_password.value
  }
}

First problem: skip_validation does not seem to be supported for Azure, so the documentation is inconsistent.

Anyway, the first time it gets provisioned successfully.

If I re-apply (terraform apply), I see something that should not happen (second problem).

  ~ resource "databricks_storage_credential" "dbrk_store_cred_adls" {
        id           = "dbrk_store_cred_adls_dev"
        name         = "dbrk_store_cred_adls_dev"
        # (4 unchanged attributes hidden)

      ~ azure_service_principal {
          + client_secret  = (sensitive value)
            # (2 unchanged attributes hidden)
        }
    }

Why does it see the client_secret as updated even though it has not changed?

Also, as long as I don't attach the external location to it, things are otherwise OK, even though the credential always gets updated for no reason. Then I attach the external location (I'll spare you the details).
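(For illustration only, not the author's actual config: attaching an external location to this credential would look roughly like the following. The resource name, container, storage account, and path are placeholders.)

resource "databricks_external_location" "dbrk_ext_loc_adls" {
  # hypothetical example: name and abfss URL are placeholders
  name            = "dbrk_ext_loc_adls_dev"
  url             = "abfss://<container>@<storage-account>.dfs.core.windows.net/<path>"
  credential_name = databricks_storage_credential.dbrk_store_cred_adls.name
}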

The first terraform apply is successful and the external location gets attached to the credential. Then I re-run it and hit the third problem: force_update is not forcing the update.

cannot update storage credential: Storage credential 'dbrk_store_cred_adls' has 0 directly dependent external table(s) and 1 dependent storage location(s); use force option to update anyway.

So in this case it should not update at all (see the second problem), but it does... and even when it does, force_update is not forcing the update.

Quite a nasty behaviour... is there any known workaround for it?

jesinity avatar Jan 05 '24 14:01 jesinity

@jesinity the issue with force_update and skip_validation is likely with the backend API; could you file a support ticket for this?

The client_secret is not returned from the API, so the provider will constantly drift - we need to fix this. Have you considered using a managed identity instead of a service principal? It is the recommended approach. There are some limitations with storage credentials that use service principals; namely, they won't work with storage firewalls.
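(For reference, a minimal sketch of the managed identity approach, assuming an azurerm_databricks_access_connector already exists; the resource names below are placeholders, not part of this thread.)

resource "databricks_storage_credential" "dbrk_store_cred_adls_mi" {
  name = "dbrk_store_cred_adls_dev_mi"
  # the managed identity of a Databricks access connector replaces the service principal;
  # "this" is a hypothetical azurerm_databricks_access_connector resource
  azure_managed_identity {
    access_connector_id = azurerm_databricks_access_connector.this.id
  }
}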

nkvuong avatar Jan 05 '24 22:01 nkvuong

@nkvuong ok so:

  1. Sure, where do I open a bug for the force_update?
  2. The skip_validation is an issue with the Terraform provider itself, as the attribute is not recognized at all in the Terraform code.
  3. I tried the managed identity and managed to make it work: no spurious changes appear in the terraform plan and the storage credential gets created correctly.

jesinity avatar Jan 06 '24 17:01 jesinity

@jesinity

  1. Please raise a support ticket for Azure Databricks via the Azure portal.
  2. I double-checked: skip_validation is supported for databricks_external_location but not for databricks_storage_credential - we will need to add it.

nkvuong avatar Jan 08 '24 14:01 nkvuong

@jesinity the 1.34 release added support for skip_validation - please let us know if this works for you now.
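(A minimal sketch of how the new attribute would be used, adapted from the config earlier in this thread; whether the backend actually honours the skip is a separate question.)

resource "databricks_storage_credential" "dbrk_store_cred_adls" {
  name            = "dbrk_store_cred_adls_dev"
  # with provider >= 1.34, validation of the credential can be skipped at create/update time
  skip_validation = true
  azure_service_principal {
    directory_id   = data.azurerm_client_config.current.tenant_id
    application_id = data.azurerm_client_config.current.client_id
    client_secret  = data.azure_key_vault_secret.my_password.value
  }
}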

nkvuong avatar Jan 17 '24 09:01 nkvuong