terraform-provider-databricks
allow creating single node pools
Changes
Creating a single node pool.
Setting `num_workers = 0` and pointing both `instance_pool_id` and `driver_instance_pool_id` at the same pool creates a single node cluster backed by that pool.
However, the UI explicitly says that `spark.databricks.cluster.profile` should not be set to `singleNode` in this case.
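For illustration, a minimal sketch of the configuration described above. The resource names, node type, Spark version, and pool sizing below are placeholders, not values taken from this PR:

```hcl
resource "databricks_instance_pool" "single_node" {
  instance_pool_name                    = "single-node-pool" # placeholder name
  node_type_id                          = "i3.xlarge"        # placeholder node type
  min_idle_instances                    = 0
  max_capacity                          = 1
  idle_instance_autotermination_minutes = 10
}

resource "databricks_cluster" "single_node" {
  cluster_name            = "single-node-from-pool" # placeholder name
  spark_version           = "13.3.x-scala2.12"      # placeholder Spark version
  num_workers             = 0
  instance_pool_id        = databricks_instance_pool.single_node.id
  driver_instance_pool_id = databricks_instance_pool.single_node.id

  # Per the note above, spark.databricks.cluster.profile is intentionally
  # not set to "singleNode" here.
}
```

With a pool attached, the cluster takes its node type from the pool, so `node_type_id` is not set on the cluster itself.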
Tests
Currently trying to publish it to the Terraform Registry so I can verify it by hand.
- [x] `make test` run locally
- [x] relevant change in `docs/` folder
- [x] covered with integration tests in `internal/acceptance`
- [x] relevant acceptance tests are passing
- [x] using Go SDK
Please add unit & acceptance tests for this change.
For now I want to test it before actually writing those tests: publish the provider under my own namespace and try it out to see whether it actually works (sketched below).
I also want to start a discussion; maybe someone with experience with single node pools will see it.
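As mentioned above, the manual test amounts to pointing Terraform at a build of the provider published under a personal registry namespace. A minimal sketch; the namespace and version constraint below are placeholders:

```hcl
terraform {
  required_providers {
    databricks = {
      # placeholder namespace for a personally published test build
      source  = "my-namespace/databricks"
      version = ">= 0.0.1"
    }
  }
}
```

An alternative for purely local testing is a `provider_installation` block with `dev_overrides` in the Terraform CLI config, which points the provider source address at a locally built binary without publishing anything.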
@alexott
Added docs, acceptance test and unit tests.
I don't know about `make test` locally, because it doesn't pass on a clean master either.
I released this under my own namespace and tested it.
There is a known issue with tests failing when the DEFAULT profile is present in `~/.databrickscfg` - you can temporarily comment it out.
Codecov Report
Merging #2886 (67df664) into master (b1797d5) will increase coverage by 0.00%. The report is 1 commit behind head on master. The diff coverage is 100.00%.
| Coverage Diff | master (b1797d5) | #2886 (67df664) | +/- |
|---|---|---|---|
| Coverage | 84.15% | 84.15% | |
| Files | 153 | 153 | |
| Lines | 13415 | 13417 | +2 |
| Hits | 11289 | 11291 | +2 |
| Misses | 1503 | 1503 | |
| Partials | 623 | 623 | |

| Files | Coverage Δ | |
|---|---|---|
| clusters/clusters_api.go | 85.42% <100.00%> (+0.11%) | :arrow_up: |
Looks like it's running now and it's 🟢
✓ Filling vendor folder with library code ...
✓ Linting source code with https://staticcheck.io/ ...
✓ Running tests ...
∅ . (1ms)
✓ common (1.443s) (coverage: 78.3% of statements)
✓ aws (1.448s) (coverage: 76.6% of statements)
✓ internal/acceptance (1.647s)
✓ logger (748ms) (coverage: 16.7% of statements)
✓ libraries (2.433s) (coverage: 97.2% of statements)
✓ mlflow (1.027s) (coverage: 92.6% of statements)
✓ commands (4.349s) (coverage: 92.3% of statements)
✓ policies (466ms) (coverage: 91.7% of statements)
✓ pools (568ms) (coverage: 98.8% of statements)
✓ pipelines (2.337s) (coverage: 95.3% of statements)
✓ permissions (2.631s) (coverage: 88.1% of statements)
✓ provider (587ms) (coverage: 89.1% of statements)
✓ jobs (6.709s) (coverage: 94.8% of statements)
✓ qa (531ms) (coverage: 85.8% of statements)
✓ access (7.186s) (coverage: 91.2% of statements)
✓ secrets (1.092s) (coverage: 90.6% of statements)
✓ sql/api (563ms) (coverage: 86.9% of statements)
✓ repos (1.863s) (coverage: 93.3% of statements)
✓ serving (939ms) (coverage: 89.1% of statements)
✓ mws (6.613s) (coverage: 88.6% of statements)
✓ tokens (900ms) (coverage: 95.6% of statements)
✓ clusters (9.868s) (coverage: 91.4% of statements)
✓ scim (3.612s) (coverage: 96.2% of statements)
✓ workspace (2.087s) (coverage: 89.8% of statements)
✓ sql (4.66s) (coverage: 86.6% of statements)
✓ exporter (14.45s) (coverage: 80.0% of statements)
✓ storage (6.55s) (coverage: 94.0% of statements)
✓ catalog (15.229s) (coverage: 88.6% of statements)
DONE 1525 tests, 135 skipped in 18.090s
@mgyucht for some reason the acceptance tests on AWS are failing with a bad token.
Is there something missing? Is the bad token relevant to this change?