terraform-provider-databricks
[ISSUE] Issue with `databricks_grants` resource - causes error "Error: unknown is not fully supported yet"
Hi team, I'm using `databricks_grants` to apply permissions to some Unity Catalog elements, specifically `databricks_storage_credential` and `databricks_external_location`. I'm working within a workspace, granting access to groups and service principals that have been added to that workspace.
I'm finding that the initial grant succeeds, but an immediately subsequent `terraform plan` fails.
I can work around the issue with a `lifecycle { ignore_changes = all }` block, but then making permission changes requires a multi-step process:
- comment out the entire `databricks_grants` block
- apply the config (removing all permissions)
- uncomment the grants block, update it, and apply again
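For reference, the workaround block looks roughly like this (a sketch; the resource names, principal, and privileges here are made up for illustration):

```hcl
resource "databricks_grants" "sandbox_credential" {
  # Hypothetical storage credential; grants on an external
  # location work the same way.
  storage_credential = databricks_storage_credential.sandbox.id

  grant {
    principal  = "data-engineers"
    privileges = ["CREATE_EXTERNAL_LOCATION", "CREATE_EXTERNAL_TABLE"]
  }

  # Workaround: suppress all diffs so a follow-up plan does not
  # hit "unknown is not fully supported yet".
  lifecycle {
    ignore_changes = all
  }
}
```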
Configuration
```hcl
provider "databricks" {
}
```
Expected Behavior
The `databricks_grants` resource should not throw an error if the state has not changed. If the config has changed, the changes should be applied without error.
Actual Behavior
The following error was observed:

```
│ Error: unknown is not fully supported yet
│
│   with databricks_grants.S3_databricks_sandbox_sc,
│   on storage_credentials.tf line 8, in resource "databricks_grants" "S3_databricks_sandbox_sc":
│    8: resource "databricks_grants" "S3_databricks_sandbox_sc" {
```
Steps to Reproduce
- create `databricks_storage_credential` and `databricks_external_location` resources in a Unity Catalog metastore
- use a `databricks_grants` resource to apply permissions to them
- `terraform apply` successfully applies the permissions
- `terraform plan` immediately fails with the error raised here: https://github.com/databricks/terraform-provider-databricks/blob/master/catalog/resource_grants_test.go#L151
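A minimal reproduction along the lines of the steps above might look like this (all names, the role ARN, and the bucket URL are placeholders):

```hcl
resource "databricks_storage_credential" "sandbox" {
  name = "sandbox_credential"
  aws_iam_role {
    role_arn = "arn:aws:iam::123456789012:role/sandbox-uc-access"
  }
}

resource "databricks_external_location" "sandbox" {
  name            = "sandbox_location"
  url             = "s3://sandbox-bucket/data"
  credential_name = databricks_storage_credential.sandbox.name
}

resource "databricks_grants" "sandbox_location" {
  external_location = databricks_external_location.sandbox.id
  grant {
    principal  = "data-engineers"
    privileges = ["READ_FILES", "WRITE_FILES"]
  }
}
```

With the affected provider versions, `terraform apply` on this succeeds, and the very next `terraform plan` fails with the error above.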
Terraform and provider versions
Terraform v1.3.7 on linux_amd64
- provider registry.terraform.io/databricks/databricks v1.9.0
- provider registry.terraform.io/hashicorp/aws v4.51.0
Can you share the configuration you were using when you hit the issue? Could you also capture a debug trace?

```sh
TF_LOG_CORE=DEBUG terraform plan
```
Any update with this issue?
I noticed this issue when `terraform plan` forces replacement of the resource the grant was for, e.g. a schema. It works fine if there is no replacement.
Facing the same issue on our side. Our CI/CD pipelines are failing and the changes are not being applied.
Try adding a trailing "/" to your storage location path. This resolved the issue for me.
```hcl
resource "databricks_schema" "read_schema" {
  catalog_name = var.client_name
  storage_root = format("abfss://%s@%s/", "read", var.storage_dfs[0])
}

resource "databricks_external_location" "storage" {
  count = var.storage_count
  name  = format("storage_%s", count.index)
  url   = format("abfss://%s@%s/", "read", var.storage_dfs[count.index])
}
```
I previously did not include the trailing slash in the `storage_root` or `url` lines:

```hcl
storage_root = format("abfss://%s@%s", "read", var.storage_dfs[0])
```
This caused the resource to be recreated, and then I would get the error regarding "unknown". Once I added the `/`, the resources are no longer recreated and the error has gone away.
This error seems to also prevent `terraform destroy`.
- Has there been any update on this?
- Is there any workaround?
Is there any update on this issue?
Hi @kalpesh-shimpi, we are prioritising work on this issue and will update you in a few days.
Hi @tanmay-db, do you have any updates here? We are experiencing the same issue on Databricks TF provider 1.34. A workaround exists via running `terraform taint` on each failing grant resource, but this seems unsustainable.
Hi @marvelous-melanie, after investigating, the issue seems to happen because the name is set to `force_new`, which causes replacement. For example:

```go
type StorageCredentialInfo struct {
	Name string `json:"name" tf:"force_new"`
	// ...
}
```
If the name contains a capital letter, it is stored in lower case internally in the Terraform state. `terraform apply` will succeed, but a subsequent `terraform plan` will fail because the state holds the lower-cased name. If the name is unchanged and consists only of lower-case letters, `terraform plan` followed by `terraform apply` succeeds.
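To illustrate the case described above (both resource names and credential names are hypothetical):

```hcl
# Stored lower-cased in state, so this force_new field appears to
# have changed on every subsequent plan, triggering replacement
# and the "unknown" error.
resource "databricks_storage_credential" "bad" {
  name = "Sandbox-Credential"
}

# Round-trips through state unchanged; plan after apply is clean.
resource "databricks_storage_credential" "good" {
  name = "sandbox-credential"
}
```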
Are you seeing this issue with no change in the name?
Hi @tanmay-db, we are not changing the name, but we are adding the `storage_root` attribute to the schemas that the grants are associated with.
I think that is the same underlying issue: `storage_root` is also marked as `force_new` in the schema resource,

```go
StorageRoot string `json:"storage_root,omitempty" tf:"force_new"`
```

so any addition or change to it leads to replacement, followed by the same error.
As for why replacement (destroy and create) causes fields to be absent from the resource data: this still needs to be fixed, and fixing it will solve this whole class of problems.
I want to note again (just for anybody else having the same issue!) that a workaround exists: run `terraform taint` on each failing grant resource and your plan/apply will succeed. It would be great not to need to do that, though!
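Concretely, the taint workaround would be along these lines (the resource address here is the one from the error message above; substitute your own):

```sh
# Mark the grants resource so the next apply recreates it
terraform taint databricks_grants.S3_databricks_sandbox_sc
terraform apply
```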
Update: the issue seems to happen because of how `terraform plan` works for resources that are forced to be replaced. `CustomizeDiff` runs in two phases in this situation:
```go
// The phases Terraform runs this in, and the state available via functions
// like Get and GetChange, are as follows:
//
// * New resource: One run with no state
// * Existing resource: One run with state
// * Existing resource, forced new: One run with state (before ForceNew),
//   then one run without state (as if new resource)
// * Tainted resource: No runs (custom diff logic is skipped)
// * Destroy: No runs (standard diff logic is skipped on destroy diffs)
```
We have a working solution ready for this: https://github.com/databricks/terraform-provider-databricks/pull/3163, will merge this by early next week after more testing.
Hi @tanmay-db, can you please help with the status?
Hi @sim1501, update: the terraform provider has been released: https://github.com/databricks/terraform-provider-databricks/releases/tag/v1.36.0 but this does not contain the fix for this issue, the fix for this will be done in another patch release that we are planning for later this week.
Hi all, update: we had two patch releases after 1.36.0 for other issues, but the fix for this issue wasn't part of those releases. We will do another patch release early next week to include the fix after merging it.
I am glad I got linked to this page: we are facing the same issues here. +1 for this update. Thanks.
Hi all, update: release has been done with the fix: https://github.com/databricks/terraform-provider-databricks/releases/tag/v1.37.1. Please let me know if you are still facing the issue. cc: @alexivanov-danone @sim1501 @marvelous-melanie @kolyarice @ethompsy
I can confirm this issue no longer appears when recreating a resource that has `databricks_grants` attached to it. I had to force Terraform to use 1.37.1, as it was still trying to use 1.36.3.
Thanks for the update.
Thanks @alexivanov-danone for the confirmation.