terraform-provider-databricks
[ISSUE] Issue with `databricks_entitlements` resource. Cannot assign entitlements to account group using account-level provider
We cannot assign workspace-level entitlements to a group in an Azure Databricks workspace using the databricks_entitlements resource. This appears to have worked as recently as a month ago.
Configuration
data "databricks_group" "account_group" {
for_each = var.account_groups
display_name = each.key
}
resource "databricks_mws_permission_assignment" "add_account_group" {
for_each = data.databricks_group.account_group
workspace_id = var.databricks_workspace_id
principal_id = each.value.id
permissions = ["USER"]
}
resource "databricks_entitlements" "entitlements" {
for_each = data.databricks_group.account_group
group_id = each.value.id
allow_cluster_create = var.account_groups[each.key].allow_cluster_create
allow_instance_pool_create = var.account_groups[each.key].allow_instance_pool_create
workspace_access = var.account_groups[each.key].workspace_access
databricks_sql_access = var.account_groups[each.key].databricks_sql_access
}
Expected Behavior
The entitlements are assigned to the referenced account groups.
Actual Behavior
Plan and apply succeed, but no updates are actually made.
Steps to Reproduce
- Create an identity federation enabled workspace.
- Create an account group.
- Using the account-level provider, add the account group to the workspace and assign it entitlements via the databricks_entitlements resource (a sketch of the assumed provider configuration follows this list).
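The provider aliases referenced throughout this issue are assumed to look roughly like this; the hosts and account ID are placeholders and authentication settings are omitted:

# Account-level provider, pointed at the Azure Databricks account console.
provider "databricks" {
  alias      = "account"
  host       = "https://accounts.azuredatabricks.net"
  account_id = "<databricks-account-id>" # placeholder
}

# Workspace-level provider, pointed at the target workspace.
provider "databricks" {
  alias = "workspace"
  host  = "https://<workspace-instance>.azuredatabricks.net" # placeholder
}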
Terraform and provider versions
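Provider versions 1.36.1 and 1.38.0 were tested (see below). A minimal pin matching the versions tested might look like this (illustrative only; the source address is taken from the debug logs):

terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "1.38.0"
    }
  }
}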
Is it a regression?
It worked in the past when we were using >= 1.36.1. Testing with both 1.36.1 and the latest version (1.38.0) now shows this behavior.
Debug Output
The debug logs show a 400 response from the SCIM API:
2024-03-07T22:31:37.317Z [DEBUG] State storage *cloud.State declined to persist a state snapshot
2024-03-07T22:31:38.862Z [DEBUG] provider.terraform-provider-databricks_v1.38.0: non-retriable error: invalidPath No such attribute with the name : entitlements in the current resource: tf_resource_type=databricks_entitlements @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 @module=databricks tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=b0cd5981-d8d9-ba9d-9be5-b823aee40f97 tf_rpc=ApplyResourceChange timestamp=2024-03-07T22:31:38.862Z
2024-03-07T22:31:38.862Z [DEBUG] provider.terraform-provider-databricks_v1.38.0: PATCH /api/2.0/accounts/<id>/scim/v2/Groups/<id>
> {
> "Operations": [
> {
> "op": "remove",
> "path": "entitlements",
> "value": [
> {
> "value": "allow-cluster-create"
> },
> {
> "value": "allow-instance-pool-create"
> },
> {
> "value": "databricks-sql-access"
> },
> {
> "value": "workspace-access"
> }
> ]
> },
> {
> "op": "add",
> "path": "entitlements",
> "value": [
> {
> "value": "databricks-sql-access"
> },
> {
> "value": "workspace-access"
> }
> ]
> }
> ],
> "schemas": [
> "urn:ietf:params:scim:api:messages:2.0:PatchOp"
> ]
> }
< HTTP/2.0 400 Bad Request
< {
< "detail": "No such attribute with the name : entitlements in the current resource",
< "schemas": [
< "urn:ietf:params:scim:api:messages:2.0:Error"
< ],
< "scimType": "invalidPath",
< "status": "400"
< }: @module=databricks tf_req_id=b0cd5981-d8d9-ba9d-9be5-b823aee40f97 tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 tf_provider_addr=registry.terraform.io/databricks/databricks tf_resource_type=databricks_entitlements timestamp=2024-03-07T22:31:38.862Z
2024-03-07T22:31:38.982Z [DEBUG] provider.terraform-provider-databricks_v1.38.0: GET /api/2.0/accounts/<id>/scim/v2/Groups/<id>?attributes=entitlements
< HTTP/2.0 200 OK
< {
< "id": "<id>",
< "schemas": [
< "urn:ietf:params:scim:schemas:core:2.0:Group"
< ]
< }: tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 @module=databricks tf_provider_addr=registry.terraform.io/databricks/databricks tf_req_id=b0cd5981-d8d9-ba9d-9be5-b823aee40f97 tf_resource_type=databricks_entitlements timestamp=2024-03-07T22:31:38.982Z
2024-03-07T22:31:38.983Z [WARN] Provider "provider[\"registry.terraform.io/databricks/databricks\"].account" produced an unexpected new value for module.ead_dbricks_account_groups_assignment_prod_eus.databricks_entitlements.entitlements["group_name"], but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .workspace_access: was cty.True, but now cty.False
- .databricks_sql_access: was cty.True, but now cty.False
2024-03-07T22:31:38.983Z [DEBUG] State storage *cloud.State declined to persist a state snapshot
Important Factoids
- These account groups are provisioned via an Azure Entra ID SCIM connection
- Comment on existing issue: https://github.com/databricks/terraform-provider-databricks/issues/1860#issuecomment-1979824162
- Manually updating entitlements via the UI still works
Would you like to implement a fix?
Maybe later; we need a workaround at the moment.
We have found a workaround: the databricks_entitlements resource must be created with a workspace-level provider. For this to succeed, all affected resources must be fully torn down and recreated.
data "databricks_group" "account_group" {
for_each = var.account_groups
display_name = each.key
provider = databricks.account
}
resource "databricks_mws_permission_assignment" "add_account_group" {
for_each = data.databricks_group.account_group
workspace_id = var.databricks_workspace_id
principal_id = each.value.id
permissions = ["USER"]
provider = databricks.account
}
resource "databricks_entitlements" "entitlements" {
for_each = data.databricks_group.account_group
group_id = each.value.id
allow_cluster_create = var.account_groups[each.key].allow_cluster_create
allow_instance_pool_create = var.account_groups[each.key].allow_instance_pool_create
workspace_access = var.account_groups[each.key].workspace_access
databricks_sql_access = var.account_groups[each.key].databricks_sql_access
provider = databricks.workspace
}
The problem lay in the Terraform state showing that entitlements already existed and were set to false, when they did not exist at all. Terraform would then try to remove those nonexistent entitlements and fail with a 400. A workspace-level provider appears to avoid this problem.
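As a hedged alternative to a full destroy-and-recreate, the stale entries could be dropped from state without calling the API at all, assuming Terraform 1.7+ (older versions can use terraform state rm for the same effect). This is a sketch we have not verified against this issue:

# Forget the stale entitlements from state without attempting the
# SCIM "remove" call that fails with a 400 (Terraform 1.7+).
removed {
  from = databricks_entitlements.entitlements

  lifecycle {
    destroy = false
  }
}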