terraform-provider-null
null_data_source marked deprecated
null_data_source is now reported as deprecated. Please remove the deprecation label, as the resource is still required as a stopgap for miscellaneous deployment pipelines. The provisioner functionality is used extensively with this resource, allowing numerous other resources to depend on and wait for a provisioner to complete.
Terraform Version
Terraform v1.0.5
on linux_amd64
+ provider registry.terraform.io/brinkmanlab/galaxy v0.3.0
+ provider registry.terraform.io/hashicorp/external v2.1.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/kreuzwerker/docker v2.13.0
+ provider registry.terraform.io/terraform-providers/docker v2.7.2
Affected Resource(s)
- null_data_source
Terraform Configuration Files
Debug Output
╷
│ Warning: Deprecated Resource
│
│ with module.galaxy.data.null_data_source.api_ready,
│ on .terraform/modules/galaxy/destinations/docker/main.tf line 10, in data "null_data_source" "api_ready":
│ 10: data "null_data_source" "api_ready" {
│
│ The null_data_source was historically used to construct intermediate values to re-use elsewhere in configuration, the same can now be achieved using locals
│
│ (and one more similar warning elsewhere)
╵
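For the intermediate-values use case the warning refers to, the suggested locals equivalent would look roughly like this (a minimal sketch; names are illustrative, not from the affected module):

```hcl
# Hypothetical example: instead of routing a value through null_data_source,
# compute it once in a locals block and reference it directly.
locals {
  api_endpoint = "${var.scheme}://${var.host}:${var.port}/api"
}

# Downstream configuration then references local.api_endpoint instead of
# data.null_data_source.<name>.outputs["api_endpoint"].
```

Note that a plain local does not reproduce the ordering behavior discussed below, which is why this issue asks for the deprecation to be reconsidered.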
Expected Behavior
Don't report deprecated
Actual Behavior
Reported deprecated
Steps to Reproduce
- terraform apply
References
https://stackoverflow.com/questions/62116684/how-to-make-terraform-wait-for-cloudinit-to-finish
I can add to that a use case: building/installing a lambda artifact with null_resource that needs to wait to be completed before it can be passed to the archive_file resource. This can only be achieved with null_data_source, as far as I know.
```hcl
resource "null_resource" "lambda_requirements_install" {
  provisioner "local-exec" {
    command     = <<EOF
[[ ! -d "${local.lambda_build_dir}" ]] && mkdir -p "${local.lambda_build_dir}"
find "${local.lambda_build_dir}" ! -name .keep ! -name "${local.lambda_build_dir}" -exec rm -rf {} +
if [[ -e "${local.lambda_package_file}" ]]; then
  pip -q install --upgrade -r "${local.lambda_package_file}" -t "${local.lambda_build_dir}"
fi
# pip deletes the .keep file
touch "${local.lambda_build_dir}/.keep"
tar cf - -C "${local.lambda_source_dir}" . | tar xf - -C "${local.lambda_build_dir}"
EOF
    interpreter = ["sh", "-c"]
  }

  triggers = {
    requirements_file = base64sha256(file(local.lambda_package_file))
    source_file       = base64sha256(file(local.lambda_source_file))
  }
}

data "null_data_source" "wait_for_lambda_exporter" {
  inputs = {
    # This ensures that this data resource will not be evaluated until
    # after the null_resource has been created.
    lambda_exporter_id = null_resource.lambda_requirements_install.id

    # This value gives us something to implicitly depend on
    # in the archive_file below.
    source_dir = "${local.lambda_build_dir}/"
  }
}

data "archive_file" "lambda_zip" {
  source_dir  = data.null_data_source.wait_for_lambda_exporter.outputs["source_dir"]
  output_path = "${path.module}/lambda.zip"
  type        = "zip"
}
```
A year and no reply; I'm very concerned about this as well. This deprecation is unacceptable without a replacement.
If null_data_source is removed, there will no longer be any way (that I know of) to reliably validate multiple variables at once via precondition/postcondition without introducing either (A) a resource or (B) a dependency on some other provider's data sources.
(A) is not good, because it is incorrect, dilutes the plan, and would mess up modules that require their validations to be calculated up front rather than later in the apply.
(B) is not good, because it is incorrect, would cause confusion for developers and change reviewers, and would introduce a non-zero chance of failing its read/refresh due to API calls, something that (to my knowledge) only null_data_source safely lacks.
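The multi-variable validation pattern described above can be sketched as follows (a hypothetical example, assuming Terraform 1.2+ for custom condition checks; the variable names and condition are illustrative):

```hcl
# Hypothetical sketch: a null_data_source that exists only to run a
# precondition over several variables at once. It makes no API calls,
# so its read/refresh cannot fail for external reasons.
data "null_data_source" "validate_inputs" {
  inputs = {
    validated = "true"
  }

  lifecycle {
    precondition {
      condition     = (var.bucket_name != "") == (var.bucket_region != "")
      error_message = "bucket_name and bucket_region must be set together."
    }
  }
}
```

Because the condition is evaluated during planning, invalid combinations are rejected before any resource changes are proposed.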
You can use the time_sleep resource to handle the dependency with null_resource. The sample code above can be changed to:
```hcl
resource "time_sleep" "wait_for_lambda_exporter" {
  create_duration = "1s"

  triggers = {
    lambda_exporter_id = null_resource.lambda_requirements_install.id
    source_dir         = "${local.lambda_build_dir}/"
  }
}

data "archive_file" "lambda_zip" {
  source_dir  = time_sleep.wait_for_lambda_exporter.triggers["source_dir"]
  output_path = "${path.module}/lambda.zip"
  type        = "zip"
}
```
@qtruong77 the problem with time_sleep is that you don't know how long the process you are waiting for will last.
@vagharshakus the time_sleep is just a temporary resource that waits for the null_resource via its triggers value lambda_exporter_id = null_resource.lambda_requirements_install.id; the data source then depends on this time_sleep. The resource only creates an ordering dependency in Terraform between the data source and the null_resource execution, so we don't really need to provide the exact time to wait in the time_sleep resource. At least that's the case for me when building a lambda with dependencies.
@qtruong77 then I don't understand the time_sleep resource construct as a whole, especially the purpose of the create_duration attribute. I will take a deeper look into the time_sleep documentation.
Terraform 1.4 will contain a new terraform_data resource, which can accept input data of any type, reflect that data into an unknown output of the same type, and trigger replacement on value updates. Terraform prereleases are available if anyone wants to try out the upcoming functionality.
Reference: https://github.com/hashicorp/terraform/blob/main/website/docs/language/resources/terraform-data.mdx
Terraform 1.4.0 has been released today with the new terraform_data resource. By configuring the input attribute and having downstream resources depend on the reflected output attribute, Terraform's graph should be set up so that the terraform_data resource executes between the others. Similar to other Terraform resources, these can use provisioners as a last resort or implement lifecycle block precondition/postcondition checks. If there are any feature requests or bug reports with the terraform_data resource, please create an issue in the Terraform core issue tracker.
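As a sketch, the earlier lambda example could be adapted to terraform_data along these lines (assuming Terraform >= 1.4; untested, not an official migration recipe):

```hcl
resource "terraform_data" "wait_for_lambda_exporter" {
  # input is reflected into output; output stays unknown until apply,
  # which forces downstream readers to wait for this resource.
  input = {
    lambda_exporter_id = null_resource.lambda_requirements_install.id
    source_dir         = "${local.lambda_build_dir}/"
  }
}

data "archive_file" "lambda_zip" {
  source_dir  = terraform_data.wait_for_lambda_exporter.output.source_dir
  output_path = "${path.module}/lambda.zip"
  type        = "zip"
}
```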
We can't replace null_data_source with terraform_data because data sources behave differently from resources. If I want to have a data source that triggers validation (via precondition) before I pass it to a module, I can't:
╷
│ Error: Invalid for_each argument
│
│ on ../../../modules/s3_interface_names/main.tf line 81, in module "names":
│ 81: for_each = terraform_data.validation.output
│ ├────────────────
│ │ terraform_data.validation.output is a object, known only after apply
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.
│
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
╵
I'm stuck using null_data_source until there's an equivalent terraform_data data source instead of a resource. I'd use the new validation blocks, but they're only advisory warnings and don't actually block planning!
Hi @skeggse 👋 The hashicorp/null provider will be intentionally and wholly deprecated at some point in preference of native solutions within Terraform. If your use case cannot be solved with other native Terraform functionality, it sounds like something that should be raised in the upstream Terraform issue tracker.
There are a few potential ideas that could help without involving this utility provider:
- As you mentioned, some form of Terraform built-in provider data source, similar to the managed resource.
- Some enhanced form of built-in configuration validation that can block operations before values are passed to other modules.
The upstream Terraform maintainers may already have enhancement opinions or recommendations using existing configuration concepts for your specific situation, so having that discussion with them should hopefully provide better guidance.
Offhandedly though, without seeing your fuller configuration, does using the terraform_data resource's input attribute value as the reference instead of output help? The output attribute intentionally copies the input value while still marking it as unknown during planning, which would cause the error you see. If you do not need the value to specifically be marked as unknown, then you should be able to reference the input value directly.
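A hedged sketch of that suggestion, adapted to the for_each error above (the module source and variable names are hypothetical, not from the reporter's configuration):

```hcl
resource "terraform_data" "validation" {
  # Unlike output, input is known at plan time, so for_each over it
  # can be fully evaluated during planning.
  input = var.interface_names

  lifecycle {
    precondition {
      condition     = length(var.interface_names) > 0
      error_message = "interface_names must not be empty."
    }
  }
}

module "names" {
  source   = "../modules/s3_interface_names"
  for_each = terraform_data.validation.input

  name = each.value
}
```

The trade-off is that referencing input does not force the module to wait for the terraform_data resource's apply, only for its plan-time validation.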
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.