
Saved plan fails to apply when datasource depends on resource

Open hashibot opened this issue 8 years ago • 19 comments

This issue was originally opened by @alkar as hashicorp/terraform#15830. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

v0.9.11

Terraform Configuration Files

resource "random_id" "x" {
  byte_length = 1
}

data "ignition_file" "x" {
  filesystem = "root"
  path       = "/x"

  content {
    content = "${random_id.x.hex}"
  }
}

data "ignition_systemd_unit" "x" {
  name    = "x.service"
  content = ""
}

data "ignition_config" "x" {
  files   = ["${data.ignition_file.x.id}"]
  systemd = ["${data.ignition_systemd_unit.x.id}"]
}

Expected Behavior

Terraform should be able to apply this configuration from a saved plan.

Steps to Reproduce

Applying directly works:

$ terraform apply
data.ignition_systemd_unit.x: Refreshing state...
random_id.x: Creating...
  b64:         "" => "<computed>"
  b64_std:     "" => "<computed>"
  b64_url:     "" => "<computed>"
  byte_length: "" => "1"
  dec:         "" => "<computed>"
  hex:         "" => "<computed>"
random_id.x: Creation complete (ID: dg)
data.ignition_file.x: Refreshing state...
data.ignition_config.x: Refreshing state...

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path:

However, applying from a saved plan file:

$ terraform plan -out sp && terraform apply sp
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.ignition_systemd_unit.x: Refreshing state...
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Your plan was also saved to the path below. Call the "apply" subcommand
with this plan file and Terraform will exactly execute this execution
plan.

Path: sp

<= data.ignition_config.x
    files.#:   "<computed>"
    rendered:  "<computed>"
    systemd.#: "1"
    systemd.0: "c3582dce636f921cdc80f72fd27057f1fd14bd0b2a656383e2da0de1b09d3d2c"

<= data.ignition_file.x
    content.#:         "1"
    content.0.content: "${random_id.x.hex}"
    content.0.mime:    "text/plain"
    filesystem:        "root"
    path:              "/x"

+ random_id.x
    b64:         "<computed>"
    b64_std:     "<computed>"
    b64_url:     "<computed>"
    byte_length: "1"
    dec:         "<computed>"
    hex:         "<computed>"


Plan: 1 to add, 0 to change, 0 to destroy.
random_id.x: Creating...
  b64:         "" => "<computed>"
  b64_std:     "" => "<computed>"
  b64_url:     "" => "<computed>"
  byte_length: "" => "1"
  dec:         "" => "<computed>"
  hex:         "" => "<computed>"
random_id.x: Creation complete (ID: Qg)
data.ignition_file.x: Refreshing state...
data.ignition_config.x: Refreshing state...
Error applying plan:

1 error(s) occurred:

* data.ignition_config.x: data.ignition_config.x: invalid systemd unit "c3582dce636f921cdc80f72fd27057f1fd14bd0b2a656383e2da0de1b09d3d2c", unknown systemd unit id

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

$ terraform apply sp
Failed to load backend: This plan was created against an older state than is current. Please create
a new plan file against the latest state and try again.

Terraform doesn't allow you to run plans that were created from older
states since it doesn't properly represent the latest changes Terraform
may have made, and can result in unsafe behavior.

Plan Serial:    0
Current Serial: 1

$ terraform plan -out sp && terraform apply sp
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

random_id.x: Refreshing state... (ID: Qg)
data.ignition_systemd_unit.x: Refreshing state...
data.ignition_file.x: Refreshing state...
data.ignition_config.x: Refreshing state...
No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

References

Possibly hashicorp/terraform#11518

hashibot avatar Aug 16 '17 19:08 hashibot

Are there any updates on this? Any workarounds?

alkar avatar Aug 29 '17 16:08 alkar

Encountered a very similar error message with AWS resources, nothing to do with ignition. I reckon you opened your original issue in the right place.

foragerr avatar Oct 11 '17 21:10 foragerr

I am seeing this behavior with files in ignition_config as well. It seems this behavior is exclusive to saved plans, as I confirmed this is not an issue when using -auto-approve with terraform apply.

All I can think is that the cache and resource data id are inconsistent between the plan and apply stages, as the id is no longer valid (it has changed). See: https://github.com/terraform-providers/terraform-provider-ignition/blob/master/ignition/resource_ignition_config.go#L256

mootpt avatar Nov 22 '17 19:11 mootpt

Also seeing this

module.***.data.ignition_config.user_data: data.ignition_config.user_data: invalid file "9decbc6c412b76e16962a558d5c3ec1082e0f4c1fc6aca50b3f8cdd0ad3777ad", unknown file id
$ terraform version
Terraform v0.11.0
+ provider.aws v1.3.0
+ provider.ignition v1.0.0
+ provider.template v1.0.0

alexrudd avatar Nov 23 '17 09:11 alexrudd

I confirm that it's happening here as well. I think ignition_* data sources should be resources instead.

giacomocariello avatar Nov 24 '17 17:11 giacomocariello

Looks like they originally were but that wasn't without its issues: https://github.com/hashicorp/terraform/issues/11518#issuecomment-277220153

alexrudd avatar Nov 24 '17 18:11 alexrudd

What does CoreOS being acquired mean (if anything going forward)? https://www.redhat.com/en/blog/faq-red-hat-acquire-coreos

ghost avatar Mar 06 '18 01:03 ghost

I did a little bit more testing and this seems to work in tf 0.10.8 and 0.11.3 so long as you do not specify depends_on.

ghost avatar Mar 06 '18 03:03 ghost

A fix for this could be to have each ignition data source generate a JSON fragment corresponding to what it defines. Then, instead of referencing an id that might or might not be there, ignition_config would reference the JSON fragment directly, so no lookup would be required. It would make it completely backwards incompatible, though.
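
A hypothetical sketch of that shape, reusing the ignition_file example from the top of this issue (the `rendered` attribute on ignition_file and passing raw fragments to `files` are assumptions here, not the provider's current API):

data "ignition_file" "x" {
  filesystem = "root"
  path       = "/x"

  content {
    content = "${random_id.x.hex}"
  }
}

data "ignition_config" "x" {
  # Hypothetical: reference the rendered JSON fragment itself rather than a
  # provider-cache id, so no cache lookup is needed between plan and apply.
  files = ["${data.ignition_file.x.rendered}"]
}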

mildred avatar Mar 29 '19 14:03 mildred

I am wondering whether the error sequence happens with the following configuration (see the sketch after this list):

  • an ignition_config data source that references two ignition_file data sources
  • ignition_file.A, which depends on a resource
  • ignition_file.B, which does not depend on a resource
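
A minimal sketch of that configuration, reusing the pattern from the original report (the random_id resource and the A/B names are illustrative):

resource "random_id" "seed" {
  byte_length = 1
}

data "ignition_file" "A" {
  filesystem = "root"
  path       = "/a"

  content {
    # Depends on a resource, so it is re-read during apply.
    content = "${random_id.seed.hex}"
  }
}

data "ignition_file" "B" {
  filesystem = "root"
  path       = "/b"

  content {
    # No resource dependency, so it is only read at plan time.
    content = "static"
  }
}

data "ignition_config" "main" {
  files = [
    "${data.ignition_file.A.id}",
    "${data.ignition_file.B.id}"
  ]
}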

The error sequence would be the following:

  • plan is made; ignition_config, ignition_file.A and ignition_file.B are created in the provider's cache
  • plan is saved to file, and the in-memory cache disappears

then:

  • apply is started, and the resource that ignition_file.A depends on is created
  • ignition_file.A is regenerated because the resource it depends on has been created
  • ignition_config is regenerated because ignition_file.A changed
  • ignition_config, being regenerated, looks for ignition_file.B in the provider cache to construct the complete config
  • ignition_config cannot find ignition_file.B in the cache and errors out

I'm saying that because the file id that is declared missing corresponds to the file in the tfstate that does not depend on a resource.

A workaround would be to regenerate all the ignition data sources by making them all depend on the same resource. A correct fix would involve either persisting the cache in the plan file, or not relying on a cache that can be cleared between plan and apply.

edit: tested a workaround where every ignition data source (except ignition_config, as it's not necessary) depends on a null_resource. That updates every data source on apply, and thus everything is in the cache.

mildred avatar Apr 10 '19 07:04 mildred

Workaround code:

resource "null_resource" "always_trigger" {}

# For every ignition_ data source:
data "ignition_file" "cluster_config_machine_info" {
  ...

  depends_on = [ "null_resource.always_trigger" ]
}

...
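
For illustration, the same workaround applied to the repro configuration from the top of this issue would look roughly like this (a sketch only; the null_resource name is arbitrary):

resource "null_resource" "always_trigger" {}

data "ignition_file" "x" {
  filesystem = "root"
  path       = "/x"

  content {
    content = "${random_id.x.hex}"
  }

  # Forces this data source to be re-read during apply, alongside random_id.x.
  depends_on = ["null_resource.always_trigger"]
}

data "ignition_systemd_unit" "x" {
  name    = "x.service"
  content = ""

  depends_on = ["null_resource.always_trigger"]
}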

mildred avatar Apr 10 '19 12:04 mildred

Just to confirm that this issue affects us in a similar scenario:

1. Saved plan
2. An Ignition config which refers to a system_unit data source
3. The system_unit data source is not executed, so we get an error like: invalid systemd unit "921cdc80f72fd27057f1fd14bd0b2a656383e2da0de1b09d3d2e34eac", unknown systemd unit id

ferrandinand avatar Apr 30 '19 11:04 ferrandinand

@mildred This workaround forces the recreation of the VMs on the second execution of Terraform, which is unacceptable.

IvanovOleg avatar Jun 23 '19 15:06 IvanovOleg

Guys, do you have any updates on this? It is very annoying... I'm using Terraform 0.12.10.

galindro avatar Nov 21 '19 08:11 galindro

@galindro The whole provider should be rewritten to remove that ugly cache.

IvanovOleg avatar Nov 21 '19 08:11 IvanovOleg

Is there no workaround? In my case, I'm using ignition_config in this way. One of the parameter values is a local variable that depends on a module. If I use a null_resource as a dependency for ignition_config, it could be executed before the module...

locals {
  # Here I have a module dependency:
  teleport_docker_image = "${module.ecr.aws_ecr_repo_url}:v${var.teleport_version}"

  internal_dns_zone = "${var.aws_region}.infra.${var.dns_zone}"
  auth_route53_record = "auth_${var.app}.${local.internal_dns_zone}"
  auth_url = "https://auth.${local.internal_dns_zone}"
  ssh_url = "https://ssh.${local.internal_dns_zone}"
  k8s_url = "https://k8s.${local.internal_dns_zone}"
}

data "ignition_systemd_unit" "teleport_proxy" {
  name    = "teleport-proxy.service"
  enabled = true

  content = templatefile(
    "./templates/proxy/teleport-proxy.service.tmpl",
    {
      app = var.app,
      teleport_docker_image = local.teleport_docker_image
    }
  )
}

data "ignition_file" "teleport_proxy_config" {
  filesystem = "root"
  path       = "/etc/teleport/teleport.yaml"
  mode       = 0400

  content {
    content = templatefile(
      "./templates/proxy/teleport.yaml.tmpl",
      {
        app = var.app,
        auth_route53_record = local.auth_route53_record,
        auth_url = local.auth_url,
        ssh_url = local.ssh_url,
        k8s_url = local.k8s_url
      }
    )
  }
}

data "ignition_file" "teleport_proxy_kubeconfig" {
  filesystem = "root"
  path       = "/etc/teleport/kubeconfig.yaml"
  mode       = 0400

  content {
    content = file("./templates/proxy/kubeconfig.yaml.tmpl")
  }
}

data "ignition_config" "teleport_proxy" {
  files = concat(
    [
      data.ignition_file.teleport_proxy_config.id,
      data.ignition_file.teleport_proxy_kubeconfig.id
    ]
  )
  systemd = [data.ignition_systemd_unit.teleport_proxy.id]
}
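
A hedged sketch of one way to address the ordering concern above (the ignition_trigger name and the triggers wiring are assumptions, untested against this setup): make the null_resource itself depend on the module-derived value, then hang the ignition data sources off it so their reads are deferred past the trigger, and hence past the module, whenever the trigger has pending changes. For example, the systemd unit data source above would gain a depends_on:

resource "null_resource" "ignition_trigger" {
  # Referencing the module-derived local makes this resource wait for
  # module.ecr, and re-triggers it when the image URL changes.
  triggers = {
    teleport_docker_image = local.teleport_docker_image
  }
}

data "ignition_systemd_unit" "teleport_proxy" {
  name    = "teleport-proxy.service"
  enabled = true

  content = templatefile(
    "./templates/proxy/teleport-proxy.service.tmpl",
    {
      app                   = var.app,
      teleport_docker_image = local.teleport_docker_image
    }
  )

  # Defers this read until null_resource.ignition_trigger is handled, pushing
  # it to apply time whenever the trigger has pending changes.
  depends_on = [null_resource.ignition_trigger]
}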

galindro avatar Nov 21 '19 08:11 galindro

I give up. I'm going to replace it with template_cloud_config and use Ubuntu instead of CoreOS.

galindro avatar Nov 21 '19 10:11 galindro

Don’t give up so easily. #56 fixed this problem. We just need a new release to make it easier to use the latest version of the provider. I just discussed this yesterday with @alexsomesan—in person, no less!

seh avatar Nov 21 '19 14:11 seh

Too late... I've just migrated everything to template_cloudinit_config.

galindro avatar Nov 21 '19 18:11 galindro