terraform-provider-databricks
[ISSUE] `databricks_permissions` with `cluster_id` objects shows permanent drift if the cluster owner is not the same as the Terraform identity
Configuration
```hcl
resource "databricks_permissions" "per_user_cluster_usage" {
  for_each   = toset(local.user_specific_clusters)
  cluster_id = databricks_cluster.per_user[each.key].id

  access_control {
    user_name        = each.key
    permission_level = "CAN_RESTART"
  }
}
```
Expected Behavior
`terraform plan` should report no changes.
Actual Behavior
`terraform plan` shows planned changes on every run:
```
  # databricks_permissions.per_user_cluster_usage["[email protected]"] will be updated in-place
  ~ resource "databricks_permissions" "per_user_cluster_usage" {
        id = "/clusters/xxx"
        # (2 unchanged attributes hidden)

      - access_control {
          - permission_level = "CAN_MANAGE" -> null
          - user_name        = "[email protected]" -> null
            # (2 unchanged attributes hidden)
        }
      - access_control {
          - permission_level = "CAN_RESTART" -> null
          - user_name        = "[email protected]" -> null
            # (2 unchanged attributes hidden)
        }
      + access_control {
          + permission_level = "CAN_RESTART"
          + user_name        = "[email protected]"
        }
    }
```
Terraform and provider versions
Databricks provider 1.67.0
Important Factoids
A similar issue was reported and fixed for the SQL warehouse object type: https://github.com/databricks/terraform-provider-databricks/issues/3730

Being able to explicitly set the owner of the cluster would probably help, since the permissions that keep drifting appear to be derived from the cluster's owner, which is implicitly set to the authenticated user at creation time. However, the related issue is not currently being followed up on: https://github.com/databricks/terraform-provider-databricks/issues/2543
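One way drift like this is commonly suppressed is to declare the implicit creator entry explicitly, so the configuration matches what the Permissions API returns on refresh. The sketch below assumes the drifting `CAN_MANAGE` entry belongs to the identity that created the cluster; the `[email protected]` principal is a hypothetical placeholder for whatever identity Terraform actually runs as, and this is a workaround sketch, not a confirmed fix:

```hcl
resource "databricks_permissions" "per_user_cluster_usage" {
  for_each   = toset(local.user_specific_clusters)
  cluster_id = databricks_cluster.per_user[each.key].id

  # Mirror the entry the platform adds implicitly for the cluster creator,
  # so the refreshed state matches the configuration. The principal below
  # is a hypothetical placeholder for the identity Terraform runs as.
  access_control {
    user_name        = "[email protected]"
    permission_level = "CAN_MANAGE"
  }

  access_control {
    user_name        = each.key
    permission_level = "CAN_RESTART"
  }
}
```

If Terraform runs as a service principal rather than a user, the same idea would use `service_principal_name` instead of `user_name` in the first `access_control` block.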