
[ISSUE] Creation of storage credential with delegated permission does not work

Open guderkar opened this issue 8 months ago • 17 comments

According to this release, it should now be possible to delegate creation of storage credentials using this privilege.

If the principal does not have this privilege, we get the expected error: User does not have CREATE STORAGE CREDENTIAL on metastore

(screenshot)

But after we assign this privilege to the principal, we still cannot create a storage credential; we get a different error that just says "cannot create storage credential".

(screenshot)

I tested the same thing using Databricks Web UI and it works.
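For reference, the privilege can be delegated in Terraform roughly like this (a minimal sketch; the principal name is a placeholder, and this assumes the `databricks_grants` resource with a `metastore` argument, as supported in recent provider versions):

```hcl
resource "databricks_grants" "metastore" {
  # Grant on the metastore itself, not on a catalog or schema.
  metastore = "__metastore_id__"

  grant {
    principal  = "my-service-principal" # placeholder
    privileges = ["CREATE_STORAGE_CREDENTIAL"]
  }
}
```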

guderkar avatar Oct 20 '23 10:10 guderkar

Also seeing this issue

benkelly123 avatar Nov 01 '23 17:11 benkelly123

Are you using an account-level provider, and did you also include a metastore_id attribute in the resource definition?

binnykmathew-tlnr avatar Dec 09 '23 03:12 binnykmathew-tlnr

Please provide your source code & logs.

alexott avatar Dec 09 '23 09:12 alexott

Tested with both the workspace and the account provider:

provider "databricks" {
  alias               = "workspace"
  host                = "https://adb-XXX.Y.azuredatabricks.net"
  azure_tenant_id     = "__tenant_id__"
  azure_client_id     = "__app_id__"
  azure_client_secret = "__app_secret__"
}

provider "databricks" {
  alias               = "account"
  host                = "https://accounts.azuredatabricks.net"
  account_id          = "__account_id__"
  azure_tenant_id     = "__tenant_id__"
  azure_client_id     = "__app_id__"
  azure_client_secret = "__app_secret__"
}

# Workspace-level attempt (applied separately from the account-level one below):
resource "databricks_storage_credential" "this" {
  provider = databricks.workspace
  name = "test-credential"
  azure_managed_identity {
    access_connector_id = "/subscriptions/3e4ab56a-21ee-4c7f-b438-6f831990ff7f/resourceGroups/kg-demo-dev/providers/Microsoft.Databricks/accessConnectors/kg-demo-dbx-conn"
  }
}

# Account-level attempt:
resource "databricks_storage_credential" "this" {
  provider = databricks.account
  metastore_id = "__metastore_id__"
  name = "test-credential"
  azure_managed_identity {
    access_connector_id = "/subscriptions/3e4ab56a-21ee-4c7f-b438-6f831990ff7f/resourceGroups/kg-demo-dev/providers/Microsoft.Databricks/accessConnectors/kg-demo-dbx-conn"
  }
}

In both cases the principal has the CREATE_STORAGE_CREDENTIAL privilege on the metastore.

For the workspace provider I get: (screenshot)

For the account provider I get: (screenshot)

When I enable account admin for the principal, it works.
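For reference, that admin assignment can be expressed roughly like this (a sketch; this assumes the account-level `databricks_service_principal_role` resource accepts the `account_admin` role, and the service principal ID is a placeholder):

```hcl
resource "databricks_service_principal_role" "account_admin" {
  provider             = databricks.account
  service_principal_id = "__service_principal_internal_id__" # placeholder
  role                 = "account_admin"
}
```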

Workspace provider logs

2023-12-09T10:57:56.582+0100 [INFO]  backend/local: apply calling Apply
2023-12-09T10:57:56.582+0100 [DEBUG] Building and walking apply graph for NormalMode plan
2023-12-09T10:57:56.582+0100 [DEBUG] Resource state not found for node "databricks_storage_credential.this", instance databricks_storage_credential.this
2023-12-09T10:57:56.583+0100 [DEBUG] ProviderTransformer: "databricks_storage_credential.this (expand)" (*terraform.nodeExpandApplyableResource) needs provider["registry.terraform.io/databricks/databricks"].workspace
2023-12-09T10:57:56.583+0100 [DEBUG] ProviderTransformer: "databricks_storage_credential.this" (*terraform.NodeApplyableResourceInstance) needs provider["registry.terraform.io/databricks/databricks"].workspace
2023-12-09T10:57:56.583+0100 [DEBUG] pruning unused provider["registry.terraform.io/databricks/databricks"].account
2023-12-09T10:57:56.583+0100 [DEBUG] pruning unused provider["registry.terraform.io/databricks/databricks"]
2023-12-09T10:57:56.584+0100 [DEBUG] pruning unused provider["registry.terraform.io/hashicorp/azurerm"]
2023-12-09T10:57:56.584+0100 [DEBUG] pruning unused provider["registry.terraform.io/azure/azapi"]
2023-12-09T10:57:56.585+0100 [DEBUG] ReferenceTransformer: "databricks_storage_credential.this (expand)" references: []
2023-12-09T10:57:56.585+0100 [DEBUG] ReferenceTransformer: "databricks_storage_credential.this" references: []
2023-12-09T10:57:56.585+0100 [DEBUG] ReferenceTransformer: "provider[\"registry.terraform.io/databricks/databricks\"].workspace" references: []
2023-12-09T10:57:56.586+0100 [DEBUG] Starting graph walk: walkApply
2023-12-09T10:57:56.587+0100 [DEBUG] created provider logger: level=debug
2023-12-09T10:57:56.587+0100 [INFO]  provider: configuring client automatic mTLS
2023-12-09T10:57:56.602+0100 [DEBUG] provider: starting plugin: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe args=[.terraform/providers/registry
.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe]
2023-12-09T10:57:56.607+0100 [DEBUG] provider: plugin started: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe pid=17676
2023-12-09T10:57:56.608+0100 [DEBUG] provider: waiting for RPC address: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe
2023-12-09T10:57:57.234+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: Databricks Terraform Provider
2023-12-09T10:57:57.235+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: 
2023-12-09T10:57:57.235+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: Version 1.31.1
2023-12-09T10:57:57.235+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe:
2023-12-09T10:57:57.235+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: https://registry.terraform.io/providers/databricks/databricks/latest/docs
2023-12-09T10:57:57.235+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe:
2023-12-09T10:57:57.236+0100 [INFO]  provider.terraform-provider-databricks_v1.31.1.exe: configuring server automatic mTLS: timestamp=2023-12-09T10:57:57.236+0100
2023-12-09T10:57:57.278+0100 [DEBUG] provider: using plugin: version=5
2023-12-09T10:57:57.278+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: plugin address: address=127.0.0.1:10000 network=tcp timestamp=2023-12-09T10:57:57.277+0100
2023-12-09T10:57:57.315+0100 [WARN]  ValidateProviderConfig from "provider[\"registry.terraform.io/databricks/databricks\"].workspace" changed the config value, but that value is unused
2023-12-09T10:57:57.316+0100 [INFO]  provider.terraform-provider-databricks_v1.31.1.exe: Explicit and implicit attributes: azure_client_id, azure_client_secret, azure_tenant_id, host: @module=databricks tf_provider_addr=registry.ter
raform.io/databricks/databricks tf_req_id=07610cfc-4738-5ef9-299a-46d0698ca4fb tf_rpc=Configure @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/provider/provider.go:236 timestamp=2023-12-09T10:5
7:57.316+0100
2023-12-09T10:57:57.743+0100 [WARN]  Provider "registry.terraform.io/databricks/databricks" produced an invalid plan for databricks_storage_credential.this, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .databricks_gcp_service_account: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
databricks_storage_credential.this: Creating...
2023-12-09T10:57:57.744+0100 [INFO]  Starting apply for databricks_storage_credential.this
2023-12-09T10:57:57.744+0100 [DEBUG] databricks_storage_credential.this: applying the planned Create change
2023-12-09T10:57:57.745+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: setting computed for "databricks_gcp_service_account" from ComputedKeys: timestamp=2023-12-09T10:57:57.745+0100
2023-12-09T10:57:59.045+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: non-retriable error: : @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 tf_req_id=967e
366a-0d1d-0772-9a81-e0c9fb0369eb tf_rpc=ApplyResourceChange @module=databricks tf_provider_addr=registry.terraform.io/databricks/databricks tf_resource_type=databricks_storage_credential timestamp=2023-12-09T10:57:59.045+0100       
2023-12-09T10:57:59.045+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: POST /api/2.1/unity-catalog/storage-credentials
> {
>   "azure_managed_identity": {
>     "access_connector_id": "/subscriptions/3e4ab56a-21ee-4c7f-b438-6f831990ff7f/resourceGroups/kg-demo-dev/providers/Microso... (47 more bytes)"
>   },
>   "name": "test-credential"
> }
< HTTP/2.0 500 Internal Server Error
< {
<   "details": [
<     {
<       "@type": "type.googleapis.com/google.rpc.RequestInfo",
<       "request_id": "bbd4d5dd-0fe7-4009-bdca-745f11125c69",
<       "serving_data": ""
<     }
<   ],
<   "error_code": "INTERNAL_ERROR",
<   "message": ""
< }: @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 tf_provider_addr=registry.terraform.io/databricks/databricks tf_rpc=ApplyResourceChange @module=databricks tf_req_id=967e
366a-0d1d-0772-9a81-e0c9fb0369eb tf_resource_type=databricks_storage_credential timestamp=2023-12-09T10:57:59.045+0100
2023-12-09T10:57:59.048+0100 [ERROR] provider.terraform-provider-databricks_v1.31.1.exe: Response contains error diagnostic: @module=sdk.proto tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=967e366a-0d1d-0772
-9a81-e0c9fb0369eb tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go:58 diagnostic
_detail= diagnostic_severity=ERROR diagnostic_summary="cannot create storage credential: " tf_proto_version=5.4 tf_resource_type=databricks_storage_credential timestamp=2023-12-09T10:57:59.045+0100
2023-12-09T10:57:59.050+0100 [ERROR] vertex "databricks_storage_credential.this" error: cannot create storage credential:
╷
│ Error: cannot create storage credential: 
│
│   with databricks_storage_credential.this,
│   on main.tf line 160, in resource "databricks_storage_credential" "this":
│  160: resource "databricks_storage_credential" "this" {
│
╵
2023-12-09T10:57:59.055+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-12-09T10:57:59.079+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe pid=17676
2023-12-09T10:57:59.079+0100 [DEBUG] provider: plugin exited

Account provider logs

2023-12-09T10:59:13.457+0100 [INFO]  backend/local: apply calling Apply
2023-12-09T10:59:13.458+0100 [DEBUG] Building and walking apply graph for NormalMode plan
2023-12-09T10:59:13.458+0100 [DEBUG] Resource state not found for node "databricks_storage_credential.this", instance databricks_storage_credential.this
2023-12-09T10:59:13.458+0100 [DEBUG] ProviderTransformer: "databricks_storage_credential.this (expand)" (*terraform.nodeExpandApplyableResource) needs provider["registry.terraform.io/databricks/databricks"].account
2023-12-09T10:59:13.460+0100 [DEBUG] ProviderTransformer: "databricks_storage_credential.this" (*terraform.NodeApplyableResourceInstance) needs provider["registry.terraform.io/databricks/databricks"].account
2023-12-09T10:59:13.460+0100 [DEBUG] pruning unused provider["registry.terraform.io/azure/azapi"]
2023-12-09T10:59:13.460+0100 [DEBUG] pruning unused provider["registry.terraform.io/databricks/databricks"].workspace
2023-12-09T10:59:13.460+0100 [DEBUG] pruning unused provider["registry.terraform.io/databricks/databricks"]
2023-12-09T10:59:13.461+0100 [DEBUG] pruning unused provider["registry.terraform.io/hashicorp/azurerm"]
2023-12-09T10:59:13.461+0100 [DEBUG] ReferenceTransformer: "provider[\"registry.terraform.io/databricks/databricks\"].account" references: []
2023-12-09T10:59:13.461+0100 [DEBUG] ReferenceTransformer: "databricks_storage_credential.this (expand)" references: []
2023-12-09T10:59:13.462+0100 [DEBUG] ReferenceTransformer: "databricks_storage_credential.this" references: []
2023-12-09T10:59:13.463+0100 [DEBUG] Starting graph walk: walkApply
2023-12-09T10:59:13.463+0100 [DEBUG] created provider logger: level=debug
2023-12-09T10:59:13.464+0100 [INFO]  provider: configuring client automatic mTLS
2023-12-09T10:59:13.477+0100 [DEBUG] provider: starting plugin: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe args=[.terraform/providers/registry
.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe]
2023-12-09T10:59:13.483+0100 [DEBUG] provider: plugin started: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe pid=3608
2023-12-09T10:59:13.483+0100 [DEBUG] provider: waiting for RPC address: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe
2023-12-09T10:59:14.106+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: Databricks Terraform Provider
2023-12-09T10:59:14.106+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: 
2023-12-09T10:59:14.106+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: Version 1.31.1
2023-12-09T10:59:14.106+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe:
2023-12-09T10:59:14.107+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: https://registry.terraform.io/providers/databricks/databricks/latest/docs
2023-12-09T10:59:14.107+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe:
2023-12-09T10:59:14.108+0100 [INFO]  provider.terraform-provider-databricks_v1.31.1.exe: configuring server automatic mTLS: timestamp=2023-12-09T10:59:14.107+0100
2023-12-09T10:59:14.144+0100 [DEBUG] provider: using plugin: version=5
2023-12-09T10:59:14.144+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: plugin address: address=127.0.0.1:10000 network=tcp timestamp=2023-12-09T10:59:14.144+0100
2023-12-09T10:59:14.183+0100 [WARN]  ValidateProviderConfig from "provider[\"registry.terraform.io/databricks/databricks\"].account" changed the config value, but that value is unused
2023-12-09T10:59:14.184+0100 [INFO]  provider.terraform-provider-databricks_v1.31.1.exe: Explicit and implicit attributes: account_id, azure_client_id, azure_client_secret, azure_tenant_id, host: @module=databricks tf_provider_addr=
registry.terraform.io/databricks/databricks tf_req_id=d4aa39e5-308a-cdc0-178e-456a1efff894 tf_rpc=Configure @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/provider/provider.go:236 timestamp=202
3-12-09T10:59:14.184+0100
2023-12-09T10:59:14.594+0100 [WARN]  Provider "registry.terraform.io/databricks/databricks" produced an invalid plan for databricks_storage_credential.this, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .databricks_gcp_service_account: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
databricks_storage_credential.this: Creating...
2023-12-09T10:59:14.595+0100 [INFO]  Starting apply for databricks_storage_credential.this
2023-12-09T10:59:14.595+0100 [DEBUG] databricks_storage_credential.this: applying the planned Create change
2023-12-09T10:59:14.597+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: setting computed for "databricks_gcp_service_account" from ComputedKeys: timestamp=2023-12-09T10:59:14.596+0100
2023-12-09T10:59:15.109+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: non-retriable error: Response from server (403 Forbidden) : unexpected end of JSON input: @caller=/home/runner/work/terraform-provider-databric
ks/terraform-provider-databricks/logger/logger.go:33 @module=databricks tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=7d7bb3e6-ef56-4f87-b10c-1bf744c20a99 tf_resource_type=databricks_storage_credential tf_rp
c=ApplyResourceChange timestamp=2023-12-09T10:59:15.108+0100
2023-12-09T10:59:15.109+0100 [DEBUG] provider.terraform-provider-databricks_v1.31.1.exe: POST /api/2.0/accounts/7c5689d0-46de-4612-9cc0-d27123ec8bfe/metastores/891846a4-ff3b-487c-a852-daa0f73f1aab/storage-credentials
> {
>   "credential_info": {
>     "azure_managed_identity": {
>       "access_connector_id": "/subscriptions/3e4ab56a-21ee-4c7f-b438-6f831990ff7f/resourceGroups/kg-demo-dev/providers/Microso... (47 more bytes)"
>     },
>     "name": "test-credential"
>   }
> }
< HTTP/2.0 403 Forbidden (Error: Response from server (403 Forbidden) : unexpected end of JSON input)
: @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 @module=databricks tf_provider_addr=registry.terraform.io/databricks/databricks tf_rpc=ApplyResourceChange tf_req_id=7d7bb3e
6-ef56-4f87-b10c-1bf744c20a99 tf_resource_type=databricks_storage_credential timestamp=2023-12-09T10:59:15.108+0100
2023-12-09T10:59:15.111+0100 [ERROR] provider.terraform-provider-databricks_v1.31.1.exe: Response contains error diagnostic: @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/has
hicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go:58 diagnostic_detail= diagnostic_severity=ERROR diagnostic_summary="cannot create storage credential: Response from server (403 Forbidden) : unexpected end of JSON in
put" tf_proto_version=5.4 tf_provider_addr=registry.terraform.io/databricks/databricks tf_resource_type=databricks_storage_credential tf_rpc=ApplyResourceChange @module=sdk.proto tf_req_id=7d7bb3e6-ef56-4f87-b10c-1bf744c20a99 timest
amp=2023-12-09T10:59:15.109+0100
2023-12-09T10:59:15.112+0100 [ERROR] vertex "databricks_storage_credential.this" error: cannot create storage credential: Response from server (403 Forbidden) : unexpected end of JSON input
╷
│ Error: cannot create storage credential: Response from server (403 Forbidden) : unexpected end of JSON input
│
│   with databricks_storage_credential.this,
│   on main.tf line 160, in resource "databricks_storage_credential" "this":
│  160: resource "databricks_storage_credential" "this" {
│
╵
2023-12-09T10:59:15.118+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-12-09T10:59:15.146+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/databricks/databricks/1.31.1/windows_amd64/terraform-provider-databricks_v1.31.1.exe pid=3608
2023-12-09T10:59:15.146+0100 [DEBUG] provider: plugin exited

guderkar avatar Dec 09 '23 10:12 guderkar

We are seeing the same error. However, this is not a Terraform-provider-specific error, since the same issue arises when using the Databricks CLI.
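A CLI reproduction of the same call might look roughly like this (a sketch; the access connector ID is a placeholder, and the exact command and flag names may differ between CLI versions):

```shell
# Build the same request body the provider sends (connector ID is a placeholder).
PAYLOAD='{"name": "test-credential", "azure_managed_identity": {"access_connector_id": "<access-connector-id>"}}'
echo "$PAYLOAD"

# Attempt the create only when the CLI is installed and configured;
# with delegated permissions this fails the same way as the provider.
if command -v databricks >/dev/null 2>&1; then
  databricks storage-credentials create --json "$PAYLOAD" || true
fi
```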

DPGrev avatar Dec 21 '23 16:12 DPGrev

@guderkar @DPGrev just want to check if you're still seeing this error?

nkvuong avatar Feb 01 '24 09:02 nkvuong

Yes, still the same error. Tested a while ago with provider version 1.35.0.

guderkar avatar Feb 01 '24 10:02 guderkar

Sorry, actually there is one difference.

For the account-level provider I'm now getting the same error as for the workspace-level provider.

BEFORE: (screenshot)

NOW: (screenshot)

guderkar avatar Feb 01 '24 11:02 guderkar

We are having the same issue when trying to set up managed storage for a workspace using Terraform.

Our service principal uses a workspace level provider. The service principal has Create Storage Credential permissions on the unity catalog metastore attached to our workspace. The service principal is an admin in the workspace with all entitlements. The service principal is also a Contributor on the Access Connector for Azure Databricks.

Yet when creating the storage credential we see this error: (screenshot)

A user with the same permissions as mentioned above is able to create the storage credential via the workspace portal.

provider configuration

provider "databricks" {
  host                        = "workspaceUrl"
  azure_workspace_resource_id = "workspaceId"
}

storage credential resource configuration

resource "azurerm_storage_account" "ext_storage" {
  name                      = var.azurerm_storage_account_name
  resource_group_name       = data.azurerm_resource_group.rg.name
  location                  = data.azurerm_resource_group.rg.location
  tags                      = data.azurerm_resource_group.rg.tags
  account_tier              = "Standard"
  account_replication_type  = "GRS"
  is_hns_enabled            = true
  shared_access_key_enabled = false
}

resource "azurerm_storage_container" "ext_storage" {
  name                  = var.azurerm_storage_container_name
  storage_account_name  = azurerm_storage_account.ext_storage.name
  container_access_type = "private"
}

resource "azurerm_role_assignment" "ext_storage" {
  scope                = azurerm_storage_account.ext_storage.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azurerm_databricks_access_connector.ext_access_connector.identity[0].principal_id
}

resource "databricks_storage_credential" "external" {
  name = azurerm_databricks_access_connector.ext_access_connector.name
  azure_managed_identity {
    access_connector_id = azurerm_databricks_access_connector.ext_access_connector.id
  }
  comment = "Managed by TF"
}

We would appreciate any feedback.

shoopgates avatar Feb 08 '24 21:02 shoopgates

Update: As @guderkar mentioned in their comment above, this issue is resolved by assigning the service principal the Databricks Account Admin role. This fix works even though we are still using the workspace-level Databricks provider.

Assigning service principals the role of Account Admin is not ideal due to its excessively elevated privileges.

~~Update 2: This issue is also resolved by assigning the service principal the Databricks Metastore Admin role for the metastore. But even when giving the service principal all other available metastore permissions, we still encounter the error, so it seems they have to be Metastore Admin~~. Cannot reproduce this; it was possibly caused by slow role propagation after removing Account Admin.

shoopgates avatar Feb 09 '24 18:02 shoopgates

I experience the same problem while trying to update an existing storage credential (created manually and then imported) using the Terraform provider with an Azure Entra ID service principal that has Metastore Admin permissions.

I checked that the service principal I use can update storage credentials via a direct request to the workspace REST API endpoint.

The problem is in JSON deserialization on the server side (see diagnostic logs below): either the Terraform provider itself or the underlying databricks-sdk-go library creates an invalid payload.
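The manual check can be sketched like this (host, credential name, and comment are placeholders; `DATABRICKS_TOKEN` must hold a valid access token for the service principal, and the endpoint path matches the one in the provider logs below):

```shell
HOST="https://adb-XXX.Y.azuredatabricks.net" # placeholder workspace URL
CRED="myunitycatdev"

# Minimal update body; unlike the provider, this sends only the field being changed.
PAYLOAD='{"comment": "Updated via curl"}'
echo "$PAYLOAD"

# Only attempt the call when a token is available.
if [ -n "${DATABRICKS_TOKEN:-}" ]; then
  curl -s -X PATCH "$HOST/api/2.1/unity-catalog/storage-credentials/$CRED" \
    -H "Authorization: Bearer $DATABRICKS_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```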

Terraform logging output:

2024-02-13T11:25:34.473+0100 [TRACE] provider.terraform-provider-databricks_v1.36.2: Received request: tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_resource_type=databricks_storage_credential tf_rpc=ApplyResourceChange @module=sdk.proto tf_proto_version=5.4 @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go:846 timestamp="2024-02-13T11:25:34.473+0100"
2024-02-13T11:25:34.473+0100 [TRACE] provider.terraform-provider-databricks_v1.36.2: Sending request downstream: tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_resource_type=databricks_storage_credential tf_rpc=ApplyResourceChange @module=sdk.proto tf_proto_version=5.4 @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/downstream_request.go:20 timestamp="2024-02-13T11:25:34.473+0100"
2024-02-13T11:25:34.474+0100 [TRACE] provider.terraform-provider-databricks_v1.36.2: Calling downstream: tf_resource_type=databricks_storage_credential @module=sdk.helper_schema tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:918 tf_provider_addr=registry.terraform.io/databricks/databricks timestamp="2024-02-13T11:25:34.474+0100"
2024-02-13T11:25:34.924+0100 [DEBUG] provider.terraform-provider-databricks_v1.36.2: GET /api/2.1/unity-catalog/current-metastore-assignment
< HTTP/2.0 200 OK
< {
<   "default_catalog_name": "hive_metastore",
<   "metastore_id": "0000000-1111-2222-3333-55555555555",
<   "workspace_id": 1234567890123456
< }: @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_resource_type=databricks_storage_credential @module=databricks tf_rpc=ApplyResourceChange timestamp="2024-02-13T11:25:34.924+0100"
2024-02-13T11:25:35.605+0100 [DEBUG] provider.terraform-provider-databricks_v1.36.2: non-retriable error: : @module=databricks tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_provider_addr=registry.terraform.io/databricks/databricks tf_resource_type=databricks_storage_credential tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 timestamp="2024-02-13T11:25:35.605+0100"
2024-02-13T11:25:35.605+0100 [DEBUG] provider.terraform-provider-databricks_v1.36.2: PATCH /api/2.1/unity-catalog/storage-credentials/myunitycatdev
> {
>   "azure_managed_identity": {
>     "access_connector_id": "/subscriptions/12345678-90ab-cdef-1234-567890abcdef/resourceGroups/my-resource-group/providers/Micros... (68 more bytes)",
>     "managed_identity_id": "/subscriptions/12345678-90ab-cdef-1234-567890abcdef/resourcegroups/my-resource-group/providers/Micros... (79 more bytes)"
>   },
>   "comment": "Storage credential for 'myunitycat'. Managed by terraform"
> }
< HTTP/2.0 500 Internal Server Error
< {
<   "details": [
<     {
<       "@type": "type.googleapis.com/google.rpc.RequestInfo",
<       "request_id": "49131493-867d-4080-abc0-7abec61c059b",
<       "serving_data": ""
<     }
<   ],
<   "error_code": "INTERNAL_ERROR",
<   "message": ""
< }: @module=databricks tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 tf_provider_addr=registry.terraform.io/databricks/databricks tf_resource_type=databricks_storage_credential timestamp="2024-02-13T11:25:35.605+0100"
2024-02-13T11:25:35.605+0100 [TRACE] provider.terraform-provider-databricks_v1.36.2: Called downstream: @module=sdk.helper_schema @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:920 tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_resource_type=databricks_storage_credential tf_rpc=ApplyResourceChange timestamp="2024-02-13T11:25:35.605+0100"
2024-02-13T11:25:35.605+0100 [TRACE] provider.terraform-provider-databricks_v1.36.2: Received downstream response: @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tf5serverlogging/downstream_request.go:40 diagnostic_warning_count=0 tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_duration_ms=1131 tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 @module=sdk.proto diagnostic_error_count=1 tf_resource_type=databricks_storage_credential tf_rpc=ApplyResourceChange tf_proto_version=5.4 timestamp="2024-02-13T11:25:35.605+0100"
2024-02-13T11:25:35.605+0100 [ERROR] provider.terraform-provider-databricks_v1.36.2: Response contains error diagnostic: tf_provider_addr=registry.terraform.io/databricks/databricks diagnostic_detail="" tf_rpc=ApplyResourceChange @module=sdk.proto diagnostic_severity=ERROR tf_req_id=2915953a-c502-35a2-7f1d-577b4d4da754 tf_resource_type=databricks_storage_credential @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/diag/diagnostics.go:62 diagnostic_summary="cannot update storage credential: " tf_proto_version=5.4 timestamp="2024-02-13T11:25:35.605+0100"

Unity Catalog diagnostic logs:

  • Failed attempt using terraform
     {
         "records": [
             {
                 "resourceId": "/SUBSCRIPTIONS/12345678-90ab-cdef-1234-567890abcdef/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.DATABRICKS/WORKSPACES/MY-DATABRICKS-DEV",
                 "operationVersion": "1.0.0",
                 "identity": "{\"email\":\"abcedf01-2345-6789-0abc-def012345678\",\"subjectName\":null}",
                 "operationName": "Microsoft.Databricks/unityCatalog/updateStorageCredential",
                 "time": "2024-02-13T10:01:53Z",
                 "category": "unityCatalog",
                 "properties": {
                     "sourceIPAddress": "11.222.333.444",
                     "logId": "67ce1ca2-b76f-3037-b8e3-1c694afadd47",
                     "serviceName": "unityCatalog",
                     "userAgent": "databricks-tf-provider/1.36.2 databricks-sdk-go/0.30.0 go/1.21.6 os/darwin terraform/1.6.6 resource/storage_credential auth/azure-client-secret",
                     "response": "{\"statusCode\":500,\"errorMessage\":\"JsonParseException: Unrecognized token 'REDACTED_JSON_WEB_TOKEN': was expecting ('true', 'false' or 'null')\\n at [Source: REDACTED_JSON_WEB_TOKEN_CHMWzhZ7QWztcutuZxbvsqxPTldulaxcO2oNHdDeUSa1IHl3V4dxjZYh3pl7OQcZR11YGM0sFvoQ1sTb--b4U50SIKsDexz7_CU4pBpUCtl_PppUomxCUlKShTunW4KMZBw5y6to6o5zawhsvw23W6fJFLGR3h8pPoXaCViWwlzk8ftxHn1nqx0Qhax-ft0kuso1kRV0sHBghZZSAYvtH6t4oNO8oAbOru1ww5yCwdFGk9kdVhHwTnqFDTyrQWVkC0KDrVv0xPrDQe4Af3ZfQQGgLDzgD5DnlBOX5TqzRZ3fAgs5DsFqrEpY4euxHD4HwwFKh8wh9oUOlgQ; line: 1, column: 133]\"}",
                     "sessionId": "ephemeral-873360bc-a295-4f56-9c6c-9d513b45af69",
                     "actionName": "updateStorageCredential",
                     "requestId": "5335ecb9-c9c6-44f4-a829-adc62187d928",
                     "requestParams": "{\"name_arg\":\"myunitycatdev\",\"azure_managed_identity\":\"{\\\"access_connector_id\\\":\\\"/subscriptions/12345678-90ab-cdef-1234-567890abcdef/resourceGroups/my-resource-group/providers/Microsoft.Databricks/accessConnectors/myunitycat-dev\\\",\\\"managed_identity_id\\\":\\\"/subscriptions/12345678-90ab-cdef-1234-567890abcdef/resourcegroups/my-resource-group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myunitycat-dev\\\"}\",\"comment\":\"Storage credential for 'myunitycat'. Managed by terraform\",\"workspace_id\":\"1234567890123456\",\"metastore_id\":\"0000000-1111-2222-3333-55555555555\"}"
                 },
                 "Host": "0212-224112-j8zo3ql5-10-139-74-0"
             }
         ]
     }
    
  • Successful attempt using curl
     {
         "records": [
             {
                 "resourceId": "/SUBSCRIPTIONS/12345678-90ab-cdef-1234-567890abcdef/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.DATABRICKS/WORKSPACES/MY-DATABRICKS-DEV",
                 "operationVersion": "1.0.0",
                 "identity": "{\"email\":\"abcedf01-2345-6789-0abc-def012345678\",\"subjectName\":null}",
                 "operationName": "Microsoft.Databricks/unityCatalog/updateStorageCredential",
                 "time": "2024-02-12T15:46:38Z",
                 "category": "unityCatalog",
                 "properties": {
                     "sourceIPAddress": "11.222.333.444",
                     "logId": "cc825a64-e2c2-30e1-b32d-beb48842ec3a",
                     "serviceName": "unityCatalog",
                     "userAgent": "curl/8.4.0",
                     "response": "{\"statusCode\":200}",
                     "sessionId": "ephemeral-e957fb4c-83b5-4832-a484-0c723ecea935",
                     "actionName": "updateStorageCredential",
                     "requestId": "0292003b-d293-4cc5-b4ed-06ed70b0f0ef",
                     "requestParams": "{\"name_arg\":\"myunitycatdev\",\"owner\":\"abcedf01-2345-6789-0abc-def012345678\",\"workspace_id\":\"1234567890123456\",\"metastore_id\":\"0000000-1111-2222-3333-55555555555\"}"
                 },
                 "Host": "0208-234324-6qmsxk9r-10-139-94-2"
             },
             {
                 "resourceId": "/SUBSCRIPTIONS/12345678-90ab-cdef-1234-567890abcdef/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.DATABRICKS/WORKSPACES/MY-DATABRICKS-DEV",
                 "operationVersion": "1.0.0",
                 "identity": "{\"email\":\"abcedf01-2345-6789-0abc-def012345678\",\"subjectName\":null}",
                 "operationName": "Microsoft.Databricks/unityCatalog/getCatalog",
                 "time": "2024-02-12T15:48:24Z",
                 "category": "unityCatalog",
                 "properties": {
                     "sourceIPAddress": "11.222.333.444",
                     "logId": "e9cd0ff6-152f-3786-9d63-379a9badd6a8",
                     "serviceName": "unityCatalog",
                     "userAgent": "databricks-tf-provider/1.35.0 databricks-sdk-go/0.30.0 go/1.21.6 os/darwin terraform/1.6.6 resource/catalog auth/azure-client-secret",
                     "response": "{\"statusCode\":200}",
                     "sessionId": "ephemeral-3384f69a-1a07-46b0-9f9f-15ddeb987a3c",
                     "actionName": "getCatalog",
                     "requestId": "00d982df-26d2-405b-8d74-7336ef450693",
                     "requestParams": "{\"name_arg\":\"myunitycatdev\",\"workspace_id\":\"1234567890123456\",\"metastore_id\":\"0000000-1111-2222-3333-55555555555\"}"
                 },
                 "Host": "0208-234324-6qmsxk9r-10-139-94-2"
             },
             {
                 "resourceId": "/SUBSCRIPTIONS/12345678-90ab-cdef-1234-567890abcdef/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.DATABRICKS/WORKSPACES/MY-DATABRICKS-DEV",
                 "operationVersion": "1.0.0",
                 "identity": "{\"email\":\"abcedf01-2345-6789-0abc-def012345678\",\"subjectName\":null}",
                 "operationName": "Microsoft.Databricks/unityCatalog/getPermissions",
                 "time": "2024-02-12T15:48:23Z",
                 "category": "unityCatalog",
                 "properties": {
                     "sourceIPAddress": "11.222.333.444",
                     "logId": "756aeba2-2df8-3e30-89f5-de4909a28e3a",
                     "serviceName": "unityCatalog",
                     "userAgent": "databricks-tf-provider/1.35.0 databricks-sdk-go/0.30.0 go/1.21.6 os/darwin terraform/1.6.6 resource/grants auth/azure-client-secret",
                     "response": "{\"statusCode\":200}",
                     "sessionId": "ephemeral-a5ab683f-4a17-4b7c-9cf5-6b308132be20",
                     "actionName": "getPermissions",
                     "requestId": "a30f3fab-e990-469a-820a-9f4bcdb5dc66",
                     "requestParams": "{\"securable_type\":\"storage_credential\",\"securable_full_name\":\"myunitycatdev\",\"workspace_id\":\"1234567890123456\",\"metastore_id\":\"0000000-1111-2222-3333-55555555555\"}"
                 },
                 "Host": "0208-234324-6qmsxk9r-10-139-94-2"
             }
         ]
     }
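For context, the successful curl entry above shows `updateStorageCredential` being invoked with an explicit `owner` in `requestParams`, i.e. an ownership change on an existing credential rather than a fresh creation. In Terraform terms that maps to the `owner` argument on `databricks_storage_credential`; a minimal sketch, where the principal ID and access connector path are placeholders and not taken from these logs:

```hcl
resource "databricks_storage_credential" "example" {
  provider = databricks.workspace
  name     = "myunitycatdev"

  # Hand ownership to the service principal
  # (application ID below is a placeholder)
  owner = "abcedf01-2345-6789-0abc-def012345678"

  azure_managed_identity {
    access_connector_id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Databricks/accessConnectors/example-connector"
  }
}
```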
    

mixam24 avatar Feb 13 '24 11:02 mixam24

Hello @nkvuong,

Could you please have a look at the comment that I left above: Link

Any feedback is highly appreciated :)

Thank you in advance!

mixam24 avatar Feb 20 '24 19:02 mixam24

@mixam24 @shoopgates @guderkar apologies for the delay in getting back. I had to liaise with our engineering team to confirm the current behaviour, and it turns out that storage credential creation permission cannot be delegated to service principals - the creation will fail with various errors, as you have observed in this thread.

In summary:

  • Users can create storage credentials with the delegated permission if they are an owner/contributor of the access connector
  • Service principals can only create storage credentials with full account admin permissions.

I have also requested our documentation to be updated with the limitations accordingly.
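To make the summary above concrete, here is a minimal sketch of the configuration that is expected to work for a service principal, assuming the SPN has been granted full account admin rights (the provider alias, connector path, and names are illustrative, not confirmed by this thread):

```hcl
# A delegated CREATE STORAGE CREDENTIAL grant is not sufficient for SPNs;
# the authenticating service principal must be a Databricks account admin.
resource "databricks_storage_credential" "this" {
  provider = databricks.workspace # provider authenticated as the account-admin SPN
  name     = "test-credential"

  azure_managed_identity {
    access_connector_id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Databricks/accessConnectors/example-connector"
  }
}
```

For human users, by contrast, the delegated permission is enough as long as the user also holds Owner or Contributor on the access connector resource in Azure.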

nkvuong avatar Feb 27 '24 12:02 nkvuong

That's unfortunate. Are there any plans to support this in the future?

guderkar avatar Feb 28 '24 08:02 guderkar

Hello team, a customer is looking for a workaround for this issue.

RamaGanireddy avatar Mar 01 '24 09:03 RamaGanireddy