terraform-provider-databricks
[ISSUE] Resource databricks_cluster removes spark_env_vars without warning
Terraform removes manually configured spark_env_vars every time any change needs to be made to a databricks_cluster, even though spark_env_vars is not configured or maintained by Terraform. Even when using lifecycle ignore_changes on spark_env_vars, Terraform always removes any manually configured environment variables.
Configuration
resource "databricks_cluster" "this" {
cluster_name = "Cluster Display Name"
single_user_name = lower("[email protected]")
node_type_id = "Standard_DS3_v2"
spark_version = "9.1.x-cpu-ml-scala2.12"
autotermination_minutes = 15
is_pinned = true
autoscale {
min_workers = 1
max_workers = 2
}
spark_conf = {
"spark.databricks.passthrough.enabled" : true
"spark.hadoop.fs.permissions.umask-mode" : "000"
}
}
Expected Behavior
Cluster should be created (works).
Changes to e.g. autotermination_minutes should be applied when changed in the Terraform config (works).
Environment variables set manually by the user should not be touched (does not work).
Actual Behavior
Every time Terraform decides that a change is necessary, it applies it and warns about what will change. It never mentions spark_env_vars, yet the list of configured environment variables is always reset to an empty list.
Steps to Reproduce
- Let a cluster be created.
- Manually configure Spark environment variables.
- Let the cluster be changed by e.g. increasing the autotermination_minutes.
- Manually configured Spark environment variables are gone.
Terraform and provider versions
Terraform: v1.1.7 on darwin_amd64
Databricks provider: 0.5.4
@mstreuhofer works as designed. All manual configuration changes will always be overridden by Terraform.
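If the intent is for Terraform to own those environment variables, one option is to declare them on the resource itself so they are reapplied on every plan instead of being wiped. A minimal sketch, using placeholder variable names and values:

resource "databricks_cluster" "this" {
  # ... existing arguments from the configuration above ...

  # Hypothetical example values; use whatever variables the cluster actually needs.
  spark_env_vars = {
    "MY_ENV_VAR"  = "some-value"
    "ANOTHER_VAR" = "another-value"
  }
}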
@nfx thanks for your response, but I do have a follow-up question if I may.
Shouldn't a manual change be shown as such before being reverted?
Shouldn't I be able to ignore certain changes by using constructs like the following?
lifecycle {
  ignore_changes = [
    spark_env_vars,
  ]
}
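For reference, the lifecycle block goes inside the resource block; a sketch of how that placement would look (ignore_changes is standard Terraform, although, per this report, it does not stop the provider from resetting the manually set variables):

resource "databricks_cluster" "this" {
  # ... existing arguments ...

  lifecycle {
    # Tell Terraform not to plan changes for drift detected on this attribute.
    ignore_changes = [
      spark_env_vars,
    ]
  }
}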
@nfx does your reopening of this issue mean the "invalid" tag assigned to this issue is in fact invalid? ;)
Following up - is this issue still relevant?
The behaviour of the Databricks provider has not changed, if that is what you mean. So yes, the issue is still relevant. Thanks! (Tested just now with Databricks provider version 1.2.0.)