
[ISSUE] Issue with `databricks_sql_endpoint` resource constantly detecting change required

Open Etherdaemon opened this issue 1 year ago • 5 comments

Configuration

resource "databricks_sql_endpoint" "default" {
  name                      = "Default"
  cluster_size              = "X-Small"
  max_num_clusters          = 2
  auto_stop_mins            = 1
  enable_photon             = true
  enable_serverless_compute = true
  warehouse_type            = "PRO"

  provider = databricks.workspace
}

Expected Behavior

The first run should create the endpoint, and subsequent runs with no configuration changes should not report any actions required against the SQL endpoint.

Actual Behavior

Subsequent Terraform plans and applies report a change to the `health` attribute, requiring an in-place update of the SQL endpoint.

Steps to Reproduce

  1. Run `terraform apply` with a minimal SQL endpoint configuration
  2. Run `terraform plan` and observe `+ health = (known after apply)` reported as a change requiring the SQL endpoint to be updated in place

Example:

Terraform will perform the following actions:
  # databricks_sql_endpoint.default will be updated in-place
  ~ resource "databricks_sql_endpoint" "default" {
      + health                    = (known after apply)
        id                        = "9571d94ff8eb7432"
        name                      = "Default"
        # (15 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }

Terraform and provider versions

Terraform version: 1.5.4
Databricks provider version: v1.34.0

Is it a regression?

Yes, it worked correctly in v1.33.0

Etherdaemon avatar Jan 15 '24 03:01 Etherdaemon

Hmm - can you double-check the provider version? I don't see this behaviour in 1.34.0 - `health` is marked as computed/read-only in the PR that was released in 1.34.0: https://github.com/databricks/terraform-provider-databricks/pull/3044/files#diff-07501d57ff6b166a2f3c5bc78418aa9881bcce682f12737fa3fd5f108345195eR67

alexott avatar Jan 15 '24 07:01 alexott

Yes it appears to be 1.34.0

terraform init snippet

Initializing provider plugins...
- Finding databricks/databricks versions matching ">= 1.17.0, < 2.0.0"...
- Finding hashicorp/aws versions matching ">= 4.62.0"...
- Finding latest version of hashicorp/http...
- Finding latest version of hashicorp/dns...
- Installing hashicorp/dns v3.4.0...
- Installed hashicorp/dns v3.4.0 (signed by HashiCorp)
- Installing databricks/databricks v1.34.0...
- Installed databricks/databricks v1.34.0 (self-signed, key ID 92A95A66446BCE3F)
- Installing hashicorp/aws v5.32.1...
- Installed hashicorp/aws v5.32.1 (signed by HashiCorp)
- Installing hashicorp/http v3.4.1...
- Installed hashicorp/http v3.4.1 (signed by HashiCorp)

Plan snippet

Terraform will perform the following actions:
  # databricks_sql_endpoint.default will be updated in-place
  ~ resource "databricks_sql_endpoint" "default" {
      + health                    = (known after apply)
        id                        = "e859966dc41752a1"
        name                      = "Default"
        # (15 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }

Etherdaemon avatar Jan 15 '24 23:01 Etherdaemon

Still not reproducible:

resource "databricks_sql_endpoint" "default" {
  name                      = "Default"
  cluster_size              = "X-Small"
  max_num_clusters          = 2
  auto_stop_mins            = 1
  enable_photon             = true
  enable_serverless_compute = true
  warehouse_type            = "PRO"
}

Creation:

# tf-tests/issue-3116> terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # databricks_sql_endpoint.default will be created
  + resource "databricks_sql_endpoint" "default" {
      + auto_stop_mins            = 1
      + cluster_size              = "X-Small"
      + creator_name              = (known after apply)
      + data_source_id            = (known after apply)
      + enable_photon             = true
      + enable_serverless_compute = true
      + health                    = (known after apply)
      + id                        = (known after apply)
      + jdbc_url                  = (known after apply)
      + max_num_clusters          = 2
      + name                      = "Default"
      + num_active_sessions       = (known after apply)
      + num_clusters              = (known after apply)
      + odbc_params               = (known after apply)
      + spot_instance_policy      = "COST_OPTIMIZED"
      + state                     = (known after apply)
      + warehouse_type            = "PRO"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

databricks_sql_endpoint.default: Creating...
databricks_sql_endpoint.default: Creation complete after 8s [id=4041dc2e82243f02]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Plan:

# tf-tests/issue-3116> terraform plan
databricks_sql_endpoint.default: Refreshing state... [id=4041dc2e82243f02]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences,
so no changes are needed.
# tf-tests/issue-3116> terraform -v
Terraform v1.5.7
on darwin_arm64
+ provider registry.terraform.io/databricks/databricks v1.34.0

Please collect logs for `terraform plan`.

alexott avatar Jan 16 '24 06:01 alexott

This issue also affects me. For resources created before the version upgrade, the `health` attribute is not in the state and is repeatedly reported at plan time. For newly created resources all is fine; `health` does not come up in the plan.
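A possible interim workaround (an untested sketch; it assumes the drift is only the missing computed `health` value in state written by the older provider) is a refresh-only apply, which rewrites the state from the real infrastructure without proposing configuration changes:

```shell
# Re-read the endpoint from the API and store the server-side values
# (including the computed `health` attribute) back into state.
terraform apply -refresh-only

# A subsequent plan should then show no diff for `health`.
terraform plan
```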

emmbea avatar Jan 25 '24 10:01 emmbea

Same here

jasondamour avatar Feb 09 '24 08:02 jasondamour

I have a similar problem, though with different attributes. I am on databricks/databricks v1.39.0.

Table definition:

resource "databricks_sql_table" "test_table" {
  name               = "test_table"
  catalog_name       = "test_catalog"
  schema_name        = "test_schema"
  table_type         = "MANAGED"
  data_source_format = "DELTA"

  column {
    name = "col1"
    type = "timestamp"
  }
  column {
    name = "col2"
    type = "double"
  }
  column {
    name = "col3"
    type = "int"
  }
  column {
    name = "col4"
    type = "timestamp_ntz"
  }
  properties = {
    "delta.enableChangeDataFeed"       = true,
    "delta.autoOptimize.autoCompact"   = true,
    "delta.autoOptimize.optimizeWrite" = true
  }
}

During subsequent plans after the initial apply, we keep seeing updates to the table properties below. These properties are not set by the TF definition and seem to have been applied by Databricks implicitly, so TF sees them as modified and tries to UNSET them with every subsequent plan/apply.

09:46:47    # databricks_sql_table.test_table will be updated in-place
09:46:47    ~ resource "databricks_sql_table" "test_table" {
09:46:47          id                 = "test_catalog.test_schema.test_table"
09:46:47          name               = "test_table"
09:46:47        ~ properties         = {
09:46:47            - "delta.feature.changeDataFeed"     = "supported" -> null
09:46:47            - "delta.feature.timestampNtz"       = "supported" -> null
09:46:47              # (9 unchanged elements hidden)
09:46:47          }
09:46:47          # (5 unchanged attributes hidden)
09:46:47  
09:46:47          # (8 unchanged blocks hidden)
09:46:47      }
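If the root cause is properties injected server-side by Databricks, one possible stopgap (a hypothetical sketch, not confirmed as supported for this resource) is to tell Terraform to ignore those specific map keys via the `lifecycle` meta-argument:

```hcl
resource "databricks_sql_table" "test_table" {
  # ... existing arguments as in the definition above ...

  lifecycle {
    # Ignore the feature-flag properties Databricks adds implicitly,
    # so they no longer show up as drift on every plan.
    ignore_changes = [
      properties["delta.feature.changeDataFeed"],
      properties["delta.feature.timestampNtz"],
    ]
  }
}
```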

vsluc avatar Apr 08 '24 20:04 vsluc

@alexott Do you think what I reported above can be tackled in this issue, or should I open a new one? Please advise.

vsluc avatar Apr 09 '24 16:04 vsluc

Please open a new issue - this was fixed in #3227. We just forgot to close this one.

alexott avatar Apr 09 '24 17:04 alexott