terraform-provider-docker

Provider produced inconsistent final plan

[Open] areed1192 opened this issue 3 years ago • 10 comments

Terraform (and docker Provider) Version

Terraform v1.2.2 with the kreuzwerker/docker provider v2.16.0.

Affected Resource(s)

  • docker_registry_image (used via the terraform-aws-modules/lambda docker-build module)

Terraform Configuration Files

module "lambda_function_from_container_image" {

  # This is using the Terraform module for lambda functions.
  source = "terraform-aws-modules/lambda/aws"

  ##################
  # Function Args
  ##################
  function_name                     = "bi-pipeline-workflow-${var.name}"
  description                       = var.description
  create_package                    = false
  lambda_role                       = var.role
  timeout                           = var.timeout
  memory_size                       = var.memory_size
  create_role                       = var.role == "" ? true : false
  attach_policy                     = var.role == "" ? false : true
  vpc_subnet_ids                    = lookup(var.vpc_subnet_ids, var.environment)
  cloudwatch_logs_retention_in_days = 90
  vpc_security_group_ids            = lookup(var.vpc_security_group_ids, var.environment)
  tags                              = var.tags

  ##################
  # Container Image
  ##################
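  # Prefer an explicitly supplied image URI; otherwise fall back to the
  # image built and pushed by the docker_image module below.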
  # image_uri    = var.docker_image
  image_uri    = var.docker_image != "" ? var.docker_image : module.docker_image.image_uri
  package_type = "Image"

}

# This module builds the Docker image and pushes it to an existing ECR repo.
module "docker_image" {

  ##################
  # Docker Args
  ##################
  source           = "terraform-aws-modules/lambda/aws//modules/docker-build"
  ecr_repo         = "bi-pipeline-images"
  create_ecr_repo  = false
  image_tag        = var.name
  source_path      = var.source_path
  docker_file_path = var.folder
  build_args       = var.docker_build_args
  ecr_repo_tags    = var.tags

}

##################
# CRON TRIGGER
##################

# Define our CRON Rule for the CloudWatch Event.
resource "aws_cloudwatch_event_rule" "cron" {

  # Here I just want to note that not everyone wants to have it on a schedule.
  # This will only create a schedule if the user specifies a cron expression.
  count               = var.cron_expression != "" ? 1 : 0
  name                = "${var.name}-cron"
  description         = "Sends event to ${module.lambda_function_from_container_image.lambda_function_name} cron based."
  schedule_expression = var.cron_expression
  tags                = var.tags

}

# Define the target, in this case our Lambda Function.
resource "aws_cloudwatch_event_target" "lambda" {

  count     = var.cron_expression != "" ? 1 : 0
  target_id = "runLambda"
  rule      = aws_cloudwatch_event_rule.cron[count.index].name
  arn       = module.lambda_function_from_container_image.lambda_function_arn

}

# Make sure we have the proper permissions.
resource "aws_lambda_permission" "cloudwatch" {

  count         = var.cron_expression != "" ? 1 : 0
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = module.lambda_function_from_container_image.lambda_function_arn
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.cron[count.index].arn

}

Debug Output

│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for module.qualtircs_workflow_deployment.module.docker_image.docker_registry_image.this to include new values learned so far during apply, provider
│ "registry.terraform.io/kreuzwerker/docker" produced an invalid new value for .build[0].context: was
│ cty.StringVal("../../../../ews-bi-pipelines/:74f67bda95215e10f66cd5ccc80617dfa33669b76cf3817b6e9210faad6dbf25"), but now
│ cty.StringVal("../../../../ews-bi-pipelines/:4928478ca49e75cc85bd3be83cdee8116edfcb9f70677cab5ff175fa5a36783c").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Gist Link

Terraform TXT Log File

Expected Behaviour

I would expect the docker image to be built and pushed to AWS.

Actual Behaviour

The Docker image builds for roughly 40 seconds, then the apply fails with the error output listed above.

Steps to Reproduce

  1. terraform init
  2. terraform plan
  3. terraform apply

areed1192 · Jun 15 '22

Thanks for the issue! Due to limited availability we are currently focusing on issues which have been open for a long time and have many "upvotes". But it can very well be that working on those issues will also fix your problem :)

Junkern · Jun 22 '22

This is also affecting the latest provider version 2.17.0 and Terraform 1.2.4.

sreboot · Jul 11 '22

This is also failing with the latest v2.18.0 provider version.

sreboot · Jul 12 '22

Thanks for the updates! I assume you are also using the terraform-aws-modules/lambda/aws//modules/docker-build module? Could you post your values for source_path and docker_file_path?

Junkern · Jul 12 '22

In our case this only seems to happen on state updates, where an existing container is replaced with a new image and the sha256 digest is recomputed as part of the state modification.

module.consul-exporter.docker_container.instance[0]: Still destroying... [id=95068c1e87714aa38ae0955c2af963def6186004a7d444cd9a644767910e0fe6, 10s elapsed]
module.consul-exporter.docker_container.instance[0]: Still destroying... [id=95068c1e87714aa38ae0955c2af963def6186004a7d444cd9a644767910e0fe6, 20s elapsed]
module.consul-exporter.docker_container.instance[0]: Still destroying... [id=95068c1e87714aa38ae0955c2af963def6186004a7d444cd9a644767910e0fe6, 30s elapsed]
module.consul-exporter.docker_container.instance[0]: Destruction complete after 36s
module.consul-exporter.docker_image.myimage: Modifying... [id=sha256:83c33a3c475b756146a1959440646e9d0ac1d5244227e712083bb38bdced7f44prom/consul-exporter:v0.7.0]
module.consul-exporter.docker_image.myimage: Modifications complete after 9s [id=sha256:b021157941ca521149329d8ae68ce82a0a303d72825ba0d3a128c068dffa14ccprom/consul-exporter:v0.8.0]
╷
│Error: Provider produced inconsistent final plan
│
│When expanding the plan for
│module.consul-exporter.docker_container.instance[0] to include new values
│learned so far during apply, provider
│"registry.terraform.io/kreuzwerker/docker" produced an invalid new value
│for .image: was
│cty.StringVal("sha256:83c33a3c475b756146a1959440646e9d0ac1d5244227e712083bb38bdced7f44"),
│but now
│cty.StringVal("sha256:b021157941ca521149329d8ae68ce82a0a303d72825ba0d3a128c068dffa14cc").
│
│This is a bug in the provider, which should be reported in the provider's
│own issue tracker.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.

sreboot · Jul 12 '22

Thanks for the updates! I assume you are also using the terraform-aws-modules/lambda/aws//modules/docker-build module? Could you post your values for source_path and docker_file_path?

Actually no, we don't. We use this:

locals {
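  # uuid() yields a fresh value on every run; the resulting name/hostname
  # churn is suppressed by the ignore_changes lifecycle block below.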
  shortid = substr(uuid(), 0, 8)
  mod_labels = ["com.docker.swarm.affinities", "triton.cns.services"]
}

resource "docker_container" "instance" {
  name       = "${var.hostname}${format("%02d", count.index + 1)}-${substr(uuidv5("dns", "${var.hostname}${format("%02d", count.index + 1)}${local.shortid}"), 0, 8)}"
  hostname   = "${var.hostname}${format("%02d", count.index + 1)}-${substr(uuidv5("dns", "${var.hostname}${format("%02d", count.index + 1)}${local.shortid}"), 0, 8)}"
  image      = docker_image.myimage.latest
  count      = var.instances
  must_run   = true
  restart    = "always"
  entrypoint = var.entrypoint
  command    = var.command
  log_driver = var.log_driver
  log_opts   = var.log_opts

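  # Pass through the user-supplied labels, excluding the two that are
  # managed explicitly in the blocks below.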
  dynamic "labels" {
    for_each = [for l in var.labels : {
      label = l.label
      value = l.value
      } if !contains(local.mod_labels, l.label)
    ]

    content {
      label = labels.value.label
      value = labels.value.value
    }
  }

  labels {
    label = "com.docker.swarm.affinities"
    value = var.affinity_group != null ? "[\"affinity_group==${var.affinity_group}${format("%02d", count.index + 1)}\"]" : var.labels.com_docker_swarm_affinities.value
  }

  labels {
    label = "triton.cns.services"
    value = join(",", [var.labels.triton_cns_services.value, "${var.hostname}${format("%02d", count.index + 1)}"])
  }

  env = setunion(["COUNT=${format("%02d", count.index + 1)}", "FHOSTNAME=${var.hostname}${format("%02d", count.index + 1)}"], var.env)

  dynamic "ports" {
    for_each = var.ports

    content {
      internal = ports.key
      external = ports.key
    }
  }

  dynamic "upload" {
    for_each = var.upload_files == null ? {} : var.upload_files

    content {
      content    = upload.value.local_file
      file       = upload.value.remote_file
      executable = upload.value.executable
    }
  }

  lifecycle {
    ignore_changes = [name, hostname]
  }
}

data "docker_registry_image" "myimage" {
  name = var.image
}

resource "docker_image" "myimage" {
  name          = data.docker_registry_image.myimage.name
  keep_locally  = true
  # Re-pull the image whenever the remote digest changes.
  pull_triggers = [data.docker_registry_image.myimage.sha256_digest]
}

output "instance_name" {
  value = docker_container.instance.*.name
}

output "ip_address" {
  value = docker_container.instance.*.ip_address
}

output "ports" {
  value = var.ports
}

output "image" {
  value = docker_image.myimage.repo_digest
}

sreboot · Jul 13 '22

Ah, I just noticed something. The original post is about an inconsistent plan in docker_registry_image. Your error is about docker_container. Do you mind quickly making a new issue with "docker_container: Provider produced inconsistent final plan" as the title and posting the code you have posted here? Otherwise things will get messy^^

Junkern · Jul 13 '22

@areed1192 Could you post your values for source_path and docker_file_path?

Junkern · Jul 13 '22

@Junkern Here are the values you asked for:

  • source_path="../../../../ews-bi-pipelines/"
  • docker_file_path="workflows/korn_ferry_email/deploy/Base.Dockerfile"

Let me know if you need anything else.

To give you context, the workflows folder lives inside the ews-bi-pipelines folder; a rough sketch of the tree is below.
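
Reconstructed from the two paths above (intermediate folders inferred):

ews-bi-pipelines/
└── workflows/
    └── korn_ferry_email/
        └── deploy/
            └── Base.Dockerfile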

areed1192 · Jul 13 '22

Ah, I just noticed something. The original post is about an inconsistent plan in docker_registry_image. Your error is about docker_container. Do you mind quickly making a new issue with "docker_container: Provider produced inconsistent final plan" as the title and posting the code you have posted here? Otherwise things will get messy^^

Done in #408.

sreboot · Jul 14 '22

The build block of docker_registry_image is now deprecated and will be removed with the next major version. Please migrate to the build block of the docker_image resource; many fixes were made to the context attribute, which should resolve most of these problems. There will also be a guide for migrating from v2.x to v3.x. A rough sketch of the new setup is shown below.
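
A minimal sketch of the v3-style split between building and pushing, reusing the paths from this thread (the resource names and the registry/repo URL are placeholders, not values from this issue):

resource "docker_image" "app" {
  # Placeholder registry/repository; substitute your own ECR repo URL.
  name = "123456789012.dkr.ecr.us-east-1.amazonaws.com/bi-pipeline-images:latest"

  # In v3.x the image is built via docker_image's own build block.
  build {
    context    = "../../../../ews-bi-pipelines/"
    dockerfile = "workflows/korn_ferry_email/deploy/Base.Dockerfile"
  }
}

# Pushing to the registry is handled separately by docker_registry_image.
resource "docker_registry_image" "app" {
  name = docker_image.app.name
}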

Please open a new issue in case you encounter any bugs and issues!

Junkern · Jan 05 '23