terraform-provider-kubernetes
volume - config_map is being recreated at every apply
Terraform Version, Provider Version and Kubernetes Version
Terraform version: 1.4.6
Kubernetes provider version: v2.23.0
Kubernetes version: 1.27.3-gke.100
Terraform Configuration Files
resource "kubernetes_deployment_v1" "kube-test" {
...
volume_mount {
name = "configs-secrets"
mount_path = var.configs_secrets_dir
read_only = true
}
volume {
name = "configs-secrets"
projected {
sources {
secret {
name = kubernetes_secret.job-secrets.metadata[0].name
optional = false
dynamic "items" {
for_each = var.k8s-secrets
content {
key = items.value
path = "${items.value}/${items.value}.json"
}
}
}
config_map {
name = kubernetes_config_map.job-configs.metadata[0].name
optional = true
dynamic "items" {
for_each = setunion(toset(var.k8s-required-configmaps))
content {
key = items.key
path = "${items.key}/${items.key}.json"
}
}
}
}
}
}
}
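For clarity on what the dynamic "items" block over the config maps produces: it iterates over a set, so items.key and items.value are the same string. A minimal sketch of the expansion, assuming the variable default shown further down (["test.A", "test.B"]):

items {
  key  = "test.A"
  path = "test.A/test.A.json"
}
items {
  key  = "test.B"
  path = "test.B/test.B.json"
}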
Expected Behavior
terraform plan should show no changes.
Actual Behavior
  # kubernetes_deployment_v1.kube-test will be updated in-place
  ~ resource "kubernetes_deployment_v1" "kube-test" {
        id = "default/kube-test"
        # (1 unchanged attribute hidden)
      ~ spec {
            # (5 unchanged attributes hidden)
          ~ template {
              ~ spec {
                    # (12 unchanged attributes hidden)
                  ~ volume {
                        name = "configs-secrets"
                      ~ projected {
                            # (1 unchanged attribute hidden)
                          ~ sources {
                              + config_map {
                                  + name     = "job-configs"
                                  + optional = true
                                  + items {
                                      + key  = "test"
                                      + path = "test/test.json"
                                    }
                                  ...
                            }
                          - sources {
                              - config_map {
                                  - name     = "job-configs" -> null
                                  - optional = false -> null
                                  - items {
                                      - key  = "test" -> null
                                      - path = "test/test.json" -> null
                                    }
                                  ...
I seem to have an issue that at first looked similar to #1835, but if I have understood it correctly, that problem was caused by the service account name.
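In the meantime, one possible stop-gap (a minimal sketch, assuming the diff is purely cosmetic and the volume genuinely should not change out of band) is to tell Terraform to ignore drift on the volume block via lifecycle ignore_changes:

resource "kubernetes_deployment_v1" "kube-test" {
  # ... configuration as above ...

  lifecycle {
    # Suppress the perpetual diff on the projected volume. Note this also
    # hides legitimate changes to any volume block, so it is only a stop-gap
    # while the underlying provider behaviour is investigated.
    ignore_changes = [
      spec[0].template[0].spec[0].volume,
    ]
  }
}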
@TrimPeachu is this resource part of a module?
How is the value of var.k8s-required-configmaps being set? Is kubernetes_config_map.job-configs also dependent on that value?
Hi @alexsomesan,
This resource is not part of a module.
var.k8s-required-configmaps is defined as follows:
variable "k8s-required-configmaps" {
default = [
"test.A",
"test.B"
]
}
And correct, kubernetes_config_map.job-configs also depends on var.k8s-required-configmaps:
resource "kubernetes_config_map" "job-configs" {
provider = kubernetes
metadata {
name = "job-configs"
}
data = merge(
{ for config in var.k8s-required-configmaps : config => file("${var.jobs_configs_dir}/${config}.json") },
{ for config in local.k8s-optional-configmaps : config => file("${var.jobs_configs_dir}/${config}.json") if fileexists("${var.jobs_configs_dir}/${config}.json") }
)
}
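To spell out what that merge produces: each required config map key maps to the contents of its JSON file, and optional keys are included only when the file exists. With the variable default above, and assuming hypothetical files test.A.json and test.B.json exist under var.jobs_configs_dir, data would evaluate to roughly:

data = {
  "test.A" = file("${var.jobs_configs_dir}/test.A.json")
  "test.B" = file("${var.jobs_configs_dir}/test.B.json")
}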
However, upon testing, the same unwanted result occurs even if I use something like this:
resource "kubernetes_config_map" "job-configs" {
provider = kubernetes
metadata {
name = "job-configs"
}
data = {
"test1" = "testA"
"test2" = "testB"
"test3" = "testC"
}
}
Hi @alexsomesan, is the information I have provided sufficient, or is there anything more I can provide so you are able to assist me with this issue?
Thanks :))