
[ISSUE] Issue with `databricks_cluster` resource

Open manojhegde97 opened this issue 1 year ago • 1 comment

The `workload_type` configuration in the `databricks_cluster` resource is not taking effect: with `jobs = false`, the field comes back blank in the cluster JSON. When we set `jobs = true` instead, the setting is accepted.

Configuration

resource "databricks_cluster" "this" {
  provider                = databricks.ws
  cluster_name            = "manoj-job-new"
  spark_version           = data.databricks_spark_version.latest_lts.id
  node_type_id            = "c5d.xlarge"
  driver_node_type_id     = "c5d.xlarge"


  autoscale {
    min_workers = 1
    max_workers = 1
  }

  aws_attributes {
    zone_id = "auto"
  }

  # Prevent submitting job to run in interactive cluster
  workload_type {
    clients {
      jobs      = false
      notebooks = true
    }
  }

}

Expected Behavior

The cluster should be created with `jobs = false` under `workload_type`.
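Concretely, the created cluster's JSON definition should contain a fragment like the following (a sketch; field names follow the `workload_type` block of the Databricks Clusters REST API):

```json
{
  "workload_type": {
    "clients": {
      "jobs": false,
      "notebooks": true
    }
  }
}
```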

Actual Behavior

Only `jobs = true` is applied; `jobs = false` is silently dropped.

Terraform and provider versions

Terraform v1.6.2, Databricks provider v1.47.0

manojhegde97 avatar Jun 21 '24 07:06 manojhegde97

I encountered this issue today.

With Databricks Terraform Provider v1.45.0 this correctly creates a cluster with workload tags set that prevent Jobs from running on it.

But when I update the Databricks provider to v1.46.0 or later, jobs=false is not propagated to the created cluster.

From looking at the release notes for v1.46.0, it seems like it may have to do with this change.

seanjw13 avatar Aug 27 '24 20:08 seanjw13

This is causing issues at my company as well.

Woffendm avatar Aug 30 '24 16:08 Woffendm
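Until this is fixed, one stopgap (not an official fix) is to pin the provider to v1.45.0, the last release reported above to honor `jobs = false`:

```hcl
terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "= 1.45.0" # last version reported in this thread to apply jobs = false
    }
  }
}
```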

This is not fixed; I just tried with v1.52.0:

resource "databricks_cluster" "deng_uc_cluster" {
  provider            = databricks.created_workspace
  cluster_name        = "DEng-UC-Cluster"
  spark_version       = "14.3.x-scala2.12"
  node_type_id        = "m6i.xlarge"
  driver_node_type_id = "m6i.xlarge"
  is_pinned           = true
  data_security_mode  = "USER_ISOLATION"
  runtime_engine      = "STANDARD"
  num_workers         = 0
  enable_elastic_disk = false
  policy_id           = <some_id>

  enable_local_disk_encryption = false
  autotermination_minutes      = 25

  aws_attributes {
    first_on_demand  = 1
    ebs_volume_type  = "GENERAL_PURPOSE_SSD"
    ebs_volume_count = 1
    ebs_volume_size  = 128
    zone_id          = "auto"
  }

  autoscale {
    min_workers = 1
    max_workers = 10
  }

  workload_type {
    clients {
      jobs      = false
      notebooks = true
    }
  }
}

and the Postman API response shows:

[screenshot of the API response omitted]

cpanpalia avatar Sep 18 '24 06:09 cpanpalia