terraform-provider-kubernetes
Using kubernetes_config_map_v1_data with kubernetes_manifest silently strips the configmap labels
Terraform Version, Provider Version and Kubernetes Version
Terraform version: v1.5.5
Kubernetes provider version: v2.22.0
Kubernetes version: 1.24
Affected Resource(s)
- kubernetes_manifest
- kubernetes_config_map_v1_data
Terraform Configuration Files
resource "kubernetes_manifest" "my_configmap" {
manifest = {
"apiVersion" = "v1"
"kind" = "ConfigMap"
"metadata" = {
"labels" = {
"label" = "value" # Please notice this label
}
"name" = "my-configmap"
"namespace" = "default"
}
}
}
resource "kubernetes_config_map_v1_data" "my_cm_data" {
metadata {
name = "my-configmap"
namespace = "default"
}
force = true
depends_on = [
kubernetes_manifest.my_configmap
]
data = {
"foo" = "bar"
}
}
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "name_of_context"
}
Debug Output
I may provide this later.
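In the meantime, for anyone who wants to reproduce this with logging enabled: debug output can be captured with Terraform's standard logging environment variables, e.g.:

TF_LOG=DEBUG TF_LOG_PATH=./debug.log terraform apply "tf.plan"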
Panic Output
n/a
Steps to Reproduce
- Use the configuration above
- Execute the following commands:
terraform init -upgrade
terraform plan -out tf.plan
terraform apply "tf.plan"
Expected Behavior
What should have happened?
The ConfigMap, including its labels, should have been created as per the plan. The Terraform plan was:
  # kubernetes_config_map_v1_data.my_cm_data will be created
  + resource "kubernetes_config_map_v1_data" "my_cm_data" {
      + data          = {
          + "foo" = "bar"
        }
      + field_manager = "Terraform"
      + force         = true
      + id            = (known after apply)

      + metadata {
          + name      = "my-configmap"
          + namespace = "default"
        }
    }

  # kubernetes_manifest.my_configmap will be created
  + resource "kubernetes_manifest" "my_configmap" {
      + manifest = {
          + apiVersion = "v1"
          + kind       = "ConfigMap"
          + metadata   = {
              + labels    = {
                  + label = "value"
                }
              + name      = "my-configmap"
              + namespace = "default"
            }
        }
      + object   = {
          + apiVersion = "v1"
          + binaryData = (known after apply)
          + data       = (known after apply)
          + immutable  = (known after apply)
          + kind       = "ConfigMap"
          + metadata   = {
              + annotations                = (known after apply)
              + clusterName                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "my-configmap"
              + namespace                  = "default"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.
Actual Behavior
What actually happened?
The ConfigMap was created, but all its labels are missing:
kubectl get cm my-configmap -o yaml

apiVersion: v1
data:
  foo: bar
kind: ConfigMap
metadata:
  creationTimestamp: "2023-08-10T14:51:43Z"
  name: my-configmap
  namespace: default
  resourceVersion: "55932"
  uid: 90036280-fe07-4147-bf2b-ab75f60b6075
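For extra context, the server-side apply field ownership of the object can be inspected with a recent kubectl (the managedFields section shows which field manager last applied each field):

kubectl get configmap my-configmap -o yaml --show-managed-fields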
Important Factoids
The labels are still present in the state file, and running a new plan/apply does not detect that they are missing from the cluster, so they are never reapplied to the ConfigMap; they have to be re-added manually (see the example after the state output below).
terraform state show kubernetes_manifest.my_configmap

# kubernetes_manifest.my_configmap:
resource "kubernetes_manifest" "my_configmap" {
    manifest = {
        apiVersion = "v1"
        kind       = "ConfigMap"
        metadata   = {
            labels    = {
                label = "value"
            }
            name      = "my-configmap"
            namespace = "default"
        }
    }
    object   = {
        apiVersion = "v1"
        binaryData = null
        data       = null
        immutable  = null
        kind       = "ConfigMap"
        metadata   = {
            annotations                = null
            clusterName                = null
            creationTimestamp          = null
            deletionGracePeriodSeconds = null
            deletionTimestamp          = null
            finalizers                 = null
            generateName               = null
            generation                 = null
            labels                     = {
                "label" = "value"
            }
            managedFields              = null
            name                       = "my-configmap"
            namespace                  = "default"
            ownerReferences            = null
            resourceVersion            = null
            selfLink                   = null
            uid                        = null
        }
    }
}
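Until this is fixed, the wiped label has to be restored out of band, for example:

kubectl label configmap my-configmap -n default label=value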
References
n/a
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Hi @plallin,
Thank you for reporting this issue. It happens during the patch operation on a ConfigMap object. Here we pick up only the name and namespace, and we don't take into account any annotations and labels the object may have. As a result, they get overwritten, as sketched below.
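In other words, the applied payload for the ConfigMap data boils down to roughly the following object (a sketch, not the provider's literal request):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
  namespace: default
data:
  foo: bar

Because the labels are absent from this payload, the patch ends up overwriting them.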
This seems to be a relatively simple change. I will discuss this with my colleagues to see whether I am missing something, and if all looks good I will raise a PR to address this issue.
Thanks!
Hi @plallin,
I went through this issue one more time today, and I think we don't need to change our code. The reason the labels get wiped out is the field manager's name. By default, the field manager name is Terraform, and it is used by both resources in your example; that causes the labels to be overwritten when kubernetes_config_map_v1_data is applied. To avoid this, you can change the manager name via the field_manager option.
For example:
resource "kubernetes_config_map_v1_data" "my_cm_data" {
metadata {
name = "my-configmap"
namespace = "default"
}
force = true
depends_on = [
kubernetes_manifest.my_configmap
]
data = {
"foo" = "bar"
}
field_manager = "TerraformConfigMap"
}
Alternatively, you can simplify your configuration by moving the data block into the kubernetes_manifest resource instead of managing it with a separate resource:
resource "kubernetes_manifest" "my_configmap" {
manifest = {
"apiVersion" = "v1"
"kind" = "ConfigMap"
"metadata" = {
"labels" = {
"label" = "value"
}
"name" = "my-configmap"
"namespace" = "default"
}
data = {
"foo" = "bar"
}
}
}
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!