terraform-provider-docker
docker_image builds the image even if the build context and Dockerfile don't change
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform (and docker Provider) Version
Affected Resource(s)
docker_image
Terraform Configuration Files
resource "docker_image" "this" {
name = local.ecr_image_name
build {
context = var.source_path
dockerfile = var.docker_file_path
build_args = var.build_args
platform = var.platform
}
}
Debug Output
Panic Output
Expected Behaviour
Don't build and deploy if the context and Dockerfile haven't changed.
Actual Behaviour
Builds and deploys on every terraform apply.
Steps to Reproduce
terraform apply
Important Factoids
References
- #0000
I found that the build will be retriggered if the content (files, subfolders, etc.) of your context folder changes, which makes sense: the provider does not know which files in the context folder will be used while building the Dockerfile (e.g. COPY or ADD could copy any of them into the image).
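One workaround, if the context folder contains files the Dockerfile never uses, is to narrow the context to the directory that actually gets copied into the image. A minimal sketch (the paths and resource name here are hypothetical):

resource "docker_image" "scoped" {
  name = "example:latest"
  build {
    # Point the context at the app directory itself, so unrelated files
    # elsewhere in the repository cannot invalidate the context hash.
    context = "${path.module}/app" # hypothetical path
  }
}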
In my case the build is triggered even when the content has not changed. I suspect it's auth_config that causes the rebuild, due to changing credentials.
I've also come across this bug. Our use case was to build an image from a Dockerfile in a folder src/example and push it to an AWS ECR repository. We wanted to rebuild/push the image only when there were changes to the code in src/example. However, even when the triggers hash is the same, the docker_image is rebuilt every time.
resource "docker_registry_image" "registry_image" {
name = docker_image.image.name
}
resource "docker_image" "image" {
name = "${aws_ecr_repository.repository.repository_url}:latest"
build {
context = "${path.module}/src/example"
}
triggers = {
dir_sha1 = sha1(join("", [for f in fileset(path.module, "src/example/**") : filesha1(f)]))
}
}
Our solution was to roll back to version 2.25.0 of kreuzwerker/docker (note that the above configuration needs changing for this version).
I guess quite a few of us are currently sitting on 2.25.0 because of this.
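For reference, staying on that release just means pinning the provider in required_providers:

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.25.0"
    }
  }
}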
See also https://github.com/kreuzwerker/terraform-provider-docker/issues/555
@enc In addition to this issue, there are also #555 and #607, all about the same problem. It might be good to have them in one place and close the duplicates...
The issue samuelcortinhas describes in his comment is really annoying for CI deployments: the virtual machine that runs Terraform starts with a clean local Docker daemon, so the docker_image resource acts as if the image had been deleted remotely, and Terraform tries to recreate it by running docker build again (even though the image already exists in the remote repo). If we could use docker_registry_image for building and uploading images like before, it would solve this resource recreation on every terraform apply. This kind of optimised behaviour is no longer achievable after the changes in versions >2.25.0.
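For context, the pre-3.0 pattern this refers to looked roughly like this (a sketch against the v2.x schema, where docker_registry_image itself had a build block; the registry address and names are illustrative):

# v2.x sketch: the remote registry image is what Terraform tracks, so a
# clean local Docker daemon does not force a rebuild as long as the
# remote image still exists.
resource "docker_registry_image" "app" {
  name = "example.registry.io/app:latest" # hypothetical name
  build {
    context = "${path.module}/src"
  }
}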
I opened #607.
@samuelcortinhas I had to update to v3+ because v2.25.0 started to throw errors out of nowhere. Does it still work for you? I guess I will have to give it a try again, because with this bug I keep rebuilding stuff for nothing.
I am using AWS ECR too.
Well, I downgraded to v2.25.0 like the others and now it works again...
I guess we will never update it. :D
That's weird, I have the opposite problem (#647): the change is detected fine but the build is not triggered.
I am using the terraform-aws-modules/lambda/aws module for creating Lambdas and its docker-build submodule for creating the Docker image.
I had the same problem and fixed it by downloading the docker-build module and adding it directly to my project, changing the version of "kreuzwerker/docker" to "2.25.0", and modifying main.tf to use the docker_registry_image resource instead of docker_image.
data "aws_region" "current" {}
data "aws_caller_identity" "this" {}
locals {
ecr_address = coalesce(var.ecr_address, format("%v.dkr.ecr.%v.amazonaws.com", data.aws_caller_identity.this.account_id, data.aws_region.current.name))
ecr_repo = var.create_ecr_repo ? aws_ecr_repository.this[0].id : var.ecr_repo
image_tag = var.use_image_tag ? coalesce(var.image_tag, formatdate("YYYYMMDDhhmmss", timestamp())) : null
ecr_image_name = var.use_image_tag ? format("%v/%v:%v", local.ecr_address, local.ecr_repo, local.image_tag) : format("%v/%v", local.ecr_address, local.ecr_repo)
}
# resource "docker_image" "this" {
# name = local.ecr_image_name
# build {
# context = var.source_path
# dockerfile = var.docker_file_path
# build_args = var.build_args
# platform = var.platform
# }
# force_remove = var.force_remove
# keep_locally = var.keep_locally
# triggers = var.triggers
# }
resource "docker_registry_image" "this" {
name = local.ecr_image_name
keep_remotely = var.keep_remotely
build {
context = var.source_path
dockerfile = var.docker_file_path
build_args = var.build_args
platform = var.platform
}
triggers = var.triggers
}
......
And it worked perfectly. Hope it helps someone.
@kiril-pcg @enc Thanks for your responses. I will try your solution, even though a permanent fix would be better.
Hey, I tried to build a reproducible case, but, as always, it works for me.
Here is my full code:
resource "docker_image" "this" {
name = "${aws_ecr_repository.foo.repository_url}:latest"
build {
context = "${path.module}/src"
}
}
resource "docker_registry_image" "registry_image" {
name = docker_image.this.name
}
resource "aws_ecr_repository" "foo" {
name = "bar"
image_tag_mutability = "MUTABLE"
image_scanning_configuration {
scan_on_push = false
}
}
data "aws_ecr_authorization_token" "token" {}
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}
provider "docker" {
registry_auth {
address = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${data.aws_region.current.name}.amazonaws.com"
username = data.aws_ecr_authorization_token.token.user_name
password = data.aws_ecr_authorization_token.token.password
}
}
When I add a triggers block to my docker_image, it changes whenever I change something inside the src folder (sketched below).
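A sketch of that triggers block, reusing the fileset/sha1 pattern from samuelcortinhas's example above:

resource "docker_image" "this" {
  name = "${aws_ecr_repository.foo.repository_url}:latest"
  build {
    context = "${path.module}/src"
  }
  triggers = {
    # rebuild only when something under src/ changes
    dir_sha1 = sha1(join("", [for f in fileset("${path.module}/src", "**") : filesha1("${path.module}/src/${f}")]))
  }
}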
I am running on Linux with terraform 1.11.4
Hi @Junkern, Thanks for trying to reproduce.
This is my code:
module "docker_image_lambda_kpi_generator" {
source = "terraform-aws-modules/lambda/aws//modules/docker-build"
version = "7.20.1"
create_ecr_repo = true
ecr_repo = "${local.project_name}-kpi-gen-${var.env_name}"
image_tag = "${local.hash_lambda_kpi_generator}${local.hash_layer_utils}"
docker_file_path = "lambdas/kpi_generator/Dockerfile"
source_path = "${path.module}/.."
scan_on_push = true
triggers = { # rebuild only when one of this files changes
dir_lambda = local.hash_lambda_kpi_generator,
dir_layer = local.hash_layer_utils
}
ecr_repo_lifecycle_policy = jsonencode({
"rules" : [
{
"rulePriority" : 1,
"description" : "Keep only the last 2 images",
"selection" : {
"tagStatus" : "any",
"countType" : "imageCountMoreThan",
"countNumber" : 2
},
"action" : {
"type" : "expire"
}
}
]
})
}
docker_image tries to replace the image every time, even though nothing in my files has changed. Here is the plan output I get each time:
# module.docker_image_lambda_kpi_generator.docker_image.this will be created
+ resource "docker_image" "this" {
    + force_remove = false
    + id           = (known after apply)
    + image_id     = (known after apply)
    + keep_locally = false
    + name         = "423671310539.dkr.ecr.eu-west-1.amazonaws.com/vwt-kpi-gen-dev:a845c24f0af573ffc0f6bed2fe6da82019690def608b93ba5984534e70e4b563a34ac9c23edd61b7"
    + repo_digest  = (known after apply)
    + triggers     = {
        + "dir_lambda" = "a845c24f0af573ffc0f6bed2fe6da82019690def"
        + "dir_layer"  = "608b93ba5984534e70e4b563a34ac9c23edd61b7"
      }

    + build {
        + cache_from   = []
        + context      = "./.."
        + dockerfile   = "lambdas/kpi_generator/Dockerfile"
        + extra_hosts  = []
        + remove       = true
        + security_opt = []
        + tag          = []
      }
  }
@IlyesDemineExtVeolia Thanks for the code! Here are my thoughts:
- The terraform output does not show a `recreate`/`replace` but rather a `create`. Maybe you copied a different output?
- Because it is a `create` it also does not show which attribute exactly triggers the "change". And can you also post the full example with all the `locals`? Because those locals are passed as a trigger, they are quite important to determine whether to rebuild or not.
- ~~The `context` is in a parent directory. Are you sure that this directory does not contain any files which change frequently? E.g. a lockfile (of terraform or other package managers). If the context directory contains the terraform state lock file, it of course will trigger changes with every `terraform` run.~~
@Junkern The create happens because Terraform sees the local docker image as deleted: the run executes in a fresh local environment, so the local docker image does not persist between CI workflow executions.
I am experiencing the same issue. While it is logical that this happens, there should be a better fix or feature to suggest than downgrading to a previous version, which will probably accumulate vulnerabilities over time.
From what I can tell from the changelog from v2.25.0 to v3.0.0, this happens because image building is now limited to the docker_image resource only, making the local Docker image the entity Terraform looks for to verify existence.
In the previous version, you could use docker_registry_image to perform a build AND persist the image to a Docker registry. The existence of the image in the Docker repository was then enough to satisfy Terraform, and a new build shouldn't be triggered even with the local image missing.
Was this idea too hastily decided in (or related to) https://github.com/kreuzwerker/terraform-provider-docker/issues/458?
The issue is also well described in https://github.com/kreuzwerker/terraform-provider-docker/issues/555
Token bump for attention