
[ISSUE] Issue with `databricks_job` resource not deploying the correct job cluster's `Availability Zone`

TsuRamen opened this issue 8 months ago • 1 comment

Configuration

resource "databricks_job" "test_jobs" {
  name = "test-job"

  job_cluster {
    job_cluster_key = "test-job-cluster"
    new_cluster {
      data_security_mode  = "USER_ISOLATION"
      spark_version       = var.spark_version
      policy_id           = var.default_cluster_policy
      driver_node_type_id = "rd-fleet.2xlarge"
      node_type_id        = "rd-fleet.4xlarge"
      enable_elastic_disk = true
      autoscale {
        min_workers = 1
        max_workers = 5
      }
      aws_attributes {
        availability           = "SPOT_WITH_FALLBACK"
        first_on_demand        = 1
        zone_id                = "auto"
        spot_bid_price_percent = 100
      }
    }
  }
  task {
    task_key        = "test_task"
    job_cluster_key = "test-job-cluster"
    notebook_task {
      notebook_path = databricks_notebook.build_incremental_update.path
    }
  }
}

Expected Behavior

I expect the job cluster's Availability Zone to be auto, as set in Terraform. If auto is not allowed with fleet instances, I expect Terraform to raise an error.

Actual Behavior

Databricks Provider Version 1.49

With provider version 1.49, after deployment, the Databricks UI shows the job cluster's Availability Zone as HA instead of auto, as in the screenshot below. [screenshot]

However, I can open the drop-down and manually change it to auto. This means Databricks does allow an auto availability zone for fleet instances, and that this is a Terraform Databricks provider bug. [screenshot]

Databricks Provider Version 1.69

With provider version 1.69, after deployment, the Job page shows AUTO. [screenshot] However, when I click Configure, the Availability Zone field is blank. I can again manually change it to auto. [screenshot]
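Until this is fixed in the provider, one possible workaround is to pin a concrete zone instead of relying on auto. This is only a sketch; the zone name below is a hypothetical example and must be replaced with a zone that is valid for your workspace's region:

```hcl
  aws_attributes {
    availability           = "SPOT_WITH_FALLBACK"
    first_on_demand        = 1
    # Workaround sketch: pin an explicit zone instead of "auto" so the
    # provider cannot silently fall back to HA. "us-east-1a" is a
    # hypothetical placeholder; use a zone valid for your workspace.
    zone_id                = "us-east-1a"
    spot_bid_price_percent = 100
  }
```

This loses the automatic zone selection, but makes the deployed value deterministic and visible in a plan diff.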

Testing with databricks_cluster instead of databricks_job

When I use a databricks_cluster resource with the same configuration (fleet instance types plus zone_id = "auto"), the cluster is correctly created with zone_id = auto. This suggests the bug is specific to the databricks_job resource, since databricks_cluster works.

resource "databricks_cluster" "test_cluster" {
  # Resource wrapper reconstructed for completeness; the resource and
  # cluster names here are hypothetical.
  cluster_name        = "test-cluster"
  data_security_mode  = "USER_ISOLATION"
  spark_version       = var.spark_version
  policy_id           = var.default_cluster_policy
  driver_node_type_id = "rd-fleet.2xlarge"
  node_type_id        = "rd-fleet.4xlarge"
  enable_elastic_disk = true
  autoscale {
    min_workers = 1
    max_workers = 5
  }
  aws_attributes {
    availability           = "SPOT_WITH_FALLBACK"
    first_on_demand        = 1
    zone_id                = "auto"
    spot_bid_price_percent = 100
  }
}

[screenshot]

Steps to Reproduce

Deploy the configuration above with Databricks provider version 1.49 or 1.69 and inspect the job cluster's Availability Zone in the Databricks UI.

TsuRamen · Mar 06 '25 07:03