terraform-provider-databricks
Support account-level provider with workspace-level resources
Changes
Partially addresses #2610, #3018.
This change adds an optional `workspace_id` field to all workspace-level resources. When an account-level provider is used and `workspace_id` is specified, a workspace client is constructed by reusing that account-level provider's configuration. As a result, users of the TF provider only need to define a single provider block per account and no longer need to create separate provider blocks per workspace. The main downside is that the workspace ID must be specified on every single resource; this is slightly worse than today, where users only need to specify `depends_on` for workspace resources that don't have any other dependencies.
For example:
data "databricks_spark_version" "latest" {
workspace_id = <WSID>
}
resource "databricks_cluster" "this" {
workspace_id = <WSID>
cluster_name = "singlenode-{var.RANDOM}"
spark_version = data.databricks_spark_version.latest.id
instance_pool_id = "<ID>"
num_workers = 0
autotermination_minutes = 10
spark_conf = {
"spark.databricks.cluster.profile" = "singleNode"
"spark.master" = "local[*]"
}
custom_tags = {
"ResourceClass" = "SingleNode"
}
}
will work with an account-level provider, given that the workspace with ID `<WSID>` belongs to the account.
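For context, the single account-level provider block this relies on might look like the following minimal sketch (the host and account ID are placeholders, and the exact authentication fields depend on your cloud and auth method):

```hcl
# Minimal sketch of an account-level provider configuration.
# <ACCOUNT_ID> is a placeholder; authentication settings are omitted
# and depend on your cloud and auth method.
provider "databricks" {
  host       = "https://accounts.cloud.databricks.com"
  account_id = "<ACCOUNT_ID>"
}
```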
Supported resources
All workspace-level resources that do not already have a `workspace_id` field will support this customization. The only resources with an existing `workspace_id` field are `databricks_catalog_workspace_binding` and `databricks_metastore_assignment`. The latter is already supported at the account level, so you can switch to an account-level provider for this resource by importing it into your provider configuration, as sketched below.
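As an illustration of that import path (a sketch, not taken from this PR; the import ID format is resource-specific and left as a placeholder here), a Terraform 1.5+ `import` block against the account-level provider might look like:

```hcl
# Hypothetical sketch: adopt an existing metastore assignment under the
# account-level provider instead of recreating it.
import {
  to = databricks_metastore_assignment.this
  id = "<import ID>" # placeholder; check the resource docs for the format
}

resource "databricks_metastore_assignment" "this" {
  workspace_id = <WSID>
  metastore_id = "<METASTORE_ID>"
}
```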
Migration
To migrate from the current provider to this mechanism:
- Add the `workspace_id` field to all resources managed at the workspace level.
- Use the account-level provider instead of the workspace-level provider (see the before/after sketch below).
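To make these two steps concrete, here is a rough before/after sketch (the alias, URLs, and IDs are placeholders, not from this PR; the two `databricks_cluster` blocks are alternatives, not one configuration):

```hcl
# Before: one aliased workspace-level provider per workspace.
provider "databricks" {
  alias = "ws1"
  host  = "https://<workspace-1-url>"
}

resource "databricks_cluster" "this" {
  provider     = databricks.ws1
  cluster_name = "example"
  # ...
}

# After: a single account-level provider (as sketched earlier) plus a
# workspace_id on each workspace-level resource.
resource "databricks_cluster" "this" {
  workspace_id = <WSID>
  cluster_name = "example"
  # ...
}
```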
Tests
- [ ] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] relevant acceptance tests are passing
- [ ] using Go SDK
Codecov Report
Attention: 59 lines in your changes are missing coverage. Please review.
Comparison is base (d3acc7b) 83.57% compared to head (4ef15d3) 83.37%. Report is 8 commits behind head on main.
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main    #3188      +/-   ##
==========================================
- Coverage   83.57%   83.37%   -0.21%
==========================================
  Files         168      169       +1
  Lines       15021    15198     +177
==========================================
+ Hits        12554    12671     +117
- Misses       1729     1781      +52
- Partials      738      746       +8
```
| Files | Coverage Δ |
|---|---|
| aws/data_aws_assume_role_policy.go | 78.57% <ø> (ø) |
| aws/data_aws_bucket_policy.go | 94.73% <ø> (ø) |
| aws/data_aws_crossaccount_policy.go | 98.27% <ø> (ø) |
| catalog/data_catalogs.go | 100.00% <100.00%> (ø) |
| catalog/data_current_metastore.go | 100.00% <100.00%> (ø) |
| catalog/data_metastore.go | 100.00% <100.00%> (ø) |
| catalog/data_metastores.go | 86.66% <100.00%> (+2.05%) :arrow_up: |
| catalog/data_schemas.go | 100.00% <100.00%> (ø) |
| catalog/data_share.go | 100.00% <100.00%> (ø) |
| catalog/data_shares.go | 100.00% <100.00%> (ø) |
| ... and 34 more | |
This change would make (almost) all of the problems I'm facing using the Databricks provider go away. Is there any sort of timeline for when we might expect these changes?