terraform-provider-databricks
[ISSUE] Configuration of resource `databricks_entitlements` not reflected in workspace
Configuration
data "databricks_group" "users" {
display_name = "users"
}
resource "databricks_entitlements" "entitlements_users" {
group_id = data.databricks_group.users.id
databricks_sql_access = false
workspace_access = true
}
Expected Behavior
Entitlements of the users group should reflect the Terraform configuration.
Actual Behavior
Terraform apply proceeds with no error. However, the entitlements of the users group do not reflect the Terraform configuration: Databricks SQL Access is checked as enabled, despite databricks_sql_access being set to false.
Steps to Reproduce
- terraform apply
Terraform and provider versions
Terraform version: 1.3.4. Databricks provider version: 1.7.0.
Debug Output
Terraform commands show no errors.
Important Factoids
- Besides the general users group, individual groups already have databricks_sql_access = false set independently via Terraform.
- If I set the entitlement databricks_sql_access to false manually via the Admin Console and trigger another terraform plan, Terraform unexpectedly reports that a change from databricks_sql_access = true to databricks_sql_access = false is planned.
@camilo-s I reproduced the same issue. Can you do "terraform plan" and "terraform apply" again? In my case, a new plan to change the entitlements was generated and it worked.
@TakeshiMatsukura thanks for your reply.
The behavior in my case was similar (see my second factoid above): I manually set databricks_sql_access = false, and Terraform didn't recognize this, but it did pick it up once it planned databricks_sql_access = true -> false.
I likewise have to run the apply twice to pick up the databricks_sql_access = true -> false change when setting false on the users group.
I tried running apply twice in a row (I actually tried 5 times in a row) but it did not work.
In my case I wanted to set allow_cluster_create = false, so I ran terraform apply with the following configuration:
data "databricks_group" "users" {
display_name = "users"
}
resource "databricks_entitlements" "users" {
group_id = data.databricks_group.users.id
allow_cluster_create = false
}
This made it so databricks_sql_access = false and workspace_access = false as well, and it locked the users out of the workspace.
So I tried to revert it by updating the configuration to:
data "databricks_group" "users" {
display_name = "users"
}
resource "databricks_entitlements" "users" {
group_id = data.databricks_group.users.id
allow_cluster_create = false
workspace_access = true
databricks_sql_access = true
}
I ran terraform apply 5 times in a row. It said "Modifications complete", but the entitlements were not set (the users were still locked out, and the checkboxes in Admin Console -> Groups -> users -> Entitlements were not set either).
Any fix for this coming out soon?
Terraform version: 1.5.7. Databricks provider version: 1.28.1.
Hi there,
I came across this issue and wanted to suggest a potential workaround that might help. Have you considered using the -out option with terraform plan? This option allows you to save the execution plan to a file, ensuring that the exact plan you reviewed gets applied.
For example, you can first run terraform plan -out=plan-output to save the plan. After reviewing it and ensuring that it contains the correct changes, you can then apply this specific plan using terraform apply plan-output.
This approach can be particularly useful in ensuring consistency and predictability, especially in complex environments where the state might change between planning and applying. It might help address the issue of entitlements not being correctly updated as per the Terraform configuration.
Hope this helps!
This doesn't work for me with the latest Databricks provider.
We are using the latest version, 1.36.3, and still cannot get this to work. I am running the resource against the workspace provider; below is my code. Is there any workaround or plan to fix this?
resource "databricks_entitlements" "workspace-users" {
provider = databricks.reserve_workspace
group_id = databricks_group.apps_access.id
databricks_sql_access = true
workspace_access = true
}
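For context, the databricks.reserve_workspace alias above is just a workspace-level provider configuration; a rough sketch of how such an alias might be declared follows (the host and any authentication settings are placeholders, not from my actual setup):

provider "databricks" {
  alias = "reserve_workspace"
  host  = "https://<workspace-url>.cloud.databricks.com"
  # authentication details (token, Azure auth, etc.) omitted here
}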
Hit this confusing issue this week. A few notes for others struggling with the same thing:
- If you are attempting to manage entitlements for a workspace service principal, using the databricks_service_principal resource to directly set the entitlements will work correctly, at least as of 1.37.1.
- If you must use both databricks_entitlements and databricks_service_principal, you have to ignore the changes on the overlapping properties in the databricks_service_principal (as with any other resource of that nature); see the sketch after this list.
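A minimal sketch of that ignore_changes setup, assuming a hypothetical service principal resource named "automation" (the names and attribute list are illustrative, not taken from this issue):

resource "databricks_service_principal" "automation" {
  display_name = "automation-sp"

  lifecycle {
    # Let databricks_entitlements own these flags so the two resources
    # don't fight over the same entitlements.
    ignore_changes = [
      allow_cluster_create,
      allow_instance_pool_create,
      databricks_sql_access,
      workspace_access,
    ]
  }
}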
Hello, we are having the same issue. A service principal operating with the roles workspace admin and account admin cannot assign entitlements to an account group in the workspace.
Here is the Terraform (run with an account-level provider):
data "databricks_group" "account_group" {
for_each = var.account_groups
display_name = each.key
}
resource "databricks_mws_permission_assignment" "add_account_group" {
for_each = data.databricks_group.account_group
workspace_id = var.databricks_workspace_id
principal_id = each.value.id
permissions = ["USER"]
}
resource "databricks_entitlements" "entitlements" {
for_each = data.databricks_group.account_group
group_id = each.value.id
allow_cluster_create = var.account_groups[each.key].allow_cluster_create
allow_instance_pool_create = var.account_groups[each.key].allow_instance_pool_create
workspace_access = var.account_groups[each.key].workspace_access
databricks_sql_access = var.account_groups[each.key].databricks_sql_access
}
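For completeness, var.account_groups in the snippet above is assumed to be shaped roughly like this (a hypothetical declaration, not copied from our code):

variable "account_groups" {
  type = map(object({
    allow_cluster_create       = bool
    allow_instance_pool_create = bool
    workspace_access           = bool
    databricks_sql_access      = bool
  }))
}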
When run, terraform plan runs as expected and terraform apply succeeds. However, none of the changes are made in the workspace. When viewing the debug logs from the terraform apply, we see:
HTTP/2.0 400 Bad Request
< {
< "detail": "No such attribute with the name : entitlements in the current resource",
< "schemas": [
< "urn:ietf:params:scim:api:messages:2.0:Error"
< ],
< "scimType": "invalidPath",
< "status": "400"
< }: @module=databricks tf_req_id=a656c179-a0b3-26e7-3fa7-9e7debc516f5 tf_resource_type=databricks_entitlements tf_rpc=ApplyResourceChange @caller=/home/runner/work/terraform-provider-databricks/terraform-provider-databricks/logger/logger.go:33 tf_provider_addr=registry.terraform.io/databricks/databricks timestamp=2024-03-05T23:22:25.832Z
2024-03-05T23:22:25.838Z [DEBUG] provider.terraform-provider-databricks_v1.36.1: PATCH /api/2.0/accounts/<id>/scim/v2/Groups/<id>
> {
> "Operations": [
> {
> "op": "remove",
> "path": "entitlements",
> "value": [
> {
> "value": "allow-cluster-create"
> },
> {
> "value": "allow-instance-pool-create"
> },
> {
> "value": "databricks-sql-access"
> },
> {
> "value": "workspace-access"
> }
> ]
> },
> {
> "op": "add",
> "path": "entitlements",
> "value": [
> {
> "value": "databricks-sql-access"
> },
> {
> "value": "workspace-access"
> }
> ]
> }
> ],
> "schemas": [
> "urn:ietf:params:scim:api:messages:2.0:PatchOp"
> ]
> }
We can run this over and over, with every run succeeding, and no changes are made. I am still able to manually update entitlements on the account groups.
This worked previously for us, as recently as two weeks ago. It seems like a bug that was recently introduced. Any suggestions?
I believe the issue with this resource is that the update behavior is not working as expected. In some circumstances, removing and then adding entitlements in the same request may not actually cause them to be persisted, as noted by @shoopgates. Sorry that this has affected so many of you! I have a PR here to address this: https://github.com/databricks/terraform-provider-databricks/pull/3434. I'll aim to get this into the next TF release.