terraform-provider-docker
Ports on stopped container force replacement
Looks like this may be a reoccurrence of https://github.com/hashicorp/terraform/issues/19294
```
λ terraform -v
Terraform v0.12.12
+ provider.docker v2.5.0
```
Starting with a recent upgrade of the provider (sadly I do not remember which version I upgraded from), the Docker provider started forcing recreation of containers that have published ports.
Repro:
- Fire up this example container:
```hcl
resource "docker_container" "btsync" {
  image = "resilio/sync"
  name  = "btsync"

  capabilities {
    add = ["NET_ADMIN"]
  }

  ports {
    internal = 8888
    external = 8888
  }

  ports {
    internal = 55555
  }

  log_driver = "json-file"
  log_opts = {
    max-size = "10m"
    max-file = 3
  }

  volumes {
    host_path      = "/etc/localtime"
    container_path = "/etc/localtime"
    read_only      = true
  }

  restart         = "on-failure"
  max_retry_count = 3

  # Do not ensure that the container is running
  must_run = "false"
}
```
- `terraform apply`
- ssh onto the host
- `docker kill btsync`
- `terraform plan`
EXPECTED BEHAVIOUR:
- No changes need to be applied, as `must_run` is set to false.

ACTUAL BEHAVIOUR:
- Container wants to be recreated:
```
  + ports { # forces replacement
      + external = 8888 # forces replacement
      + internal = 8888 # forces replacement
      + ip       = "0.0.0.0" # forces replacement
      + protocol = "tcp" # forces replacement
    }
  + ports { # forces replacement
      + external = (known after apply)
      + internal = 55555 # forces replacement
      + ip       = "0.0.0.0" # forces replacement
      + protocol = "tcp" # forces replacement
    }
```
Note that this doesn't happen with another container that is also stopped but has no ports defined; here's the example config:
```hcl
resource "docker_container" "speedtest_exporter" {
  image    = "nlamirault/speedtest_exporter"
  name     = "speedtest"
  hostname = "speedtest"

  networks_advanced {
    name    = docker_network.homelab.name
    aliases = ["speedtest", "speedtest.docker"]
  }

  volumes {
    host_path      = "/etc/localtime"
    container_path = "/etc/localtime"
    read_only      = true
  }

  restart         = "on-failure"
  max_retry_count = 3

  # Do not ensure that the container needs to be running
  must_run = "false"
}
```
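As a possible stop-gap until the provider is fixed, the standard Terraform `lifecycle` meta-argument can tell Terraform to ignore diffs on the `ports` attribute. This is just a sketch of a workaround on my side, not something confirmed by the maintainers, and it comes with a caveat:

```hcl
resource "docker_container" "btsync" {
  # ... container configuration as above ...

  lifecycle {
    # Untested workaround sketch: ignore any diff Terraform computes for
    # the published ports, which should suppress the spurious
    # "forces replacement". Caveat: this also hides *real* port changes
    # from future plans.
    ignore_changes = [ports]
  }
}
```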
Yes, we need to revisit this in #138, which will probably be a BC. Sorry, but the feature from #103 causes too much trouble. It is very likely a bug in the plugin-sdk: https://github.com/hashicorp/terraform-plugin-sdk/issues/195
@mavogel Thanks for the quick response!
Just to check, what does "BC" mean in your context?
Sorry for the abbreviations: BC -> Breaking Change
I'm struggling with this on my running containers: it forces me to re-create the container on every terraform run. It also wants to replace my volumes.
```
Terraform v0.12.19
+ provider.docker v2.6.0
```
I just started learning Terraform yesterday; my instructor uses v0.11 but I am using 0.12. The first batch of lessons focuses on Docker, and with each `terraform apply` (even without any change to my *.tf file) the container is replaced with a new one, although the previous container was working fine. In my opinion this kills the concept of idempotency in infrastructure as code.
Versions:
```
Terraform v0.12.24
+ provider.docker v2.7.0
```
You're right @etattw, it is currently a bug in v2.7.0, and it is not intended that the container is replaced each time even though the tf did not change. See #242