
x-kubernetes-preserve-unknown-fields in CRD causes resource recreation

Open rvdh opened this issue 3 years ago • 4 comments

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.1.9
Kubernetes provider version: 2.14.0
Kubernetes version: 1.22

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

#
# Define a StorageCluster that the Operator will make
#
resource "kubernetes_manifest" "storagecluster" {
  count = var.crds_available ? 1 : 0
  manifest = yamldecode(templatefile("${path.module}/px-storagecluster.yaml.tmpl",
    {
      cluster_name      = var.cluster_name
      portworx_version  = var.portworx_version
      image_registry    = var.image_registry
      image_pull_secret = kubernetes_secret.image_pull_secret.metadata.0.name
    }
  ))

  wait {
    fields = {
      "status.phase" = "Online"
    }
  }
}

Expected Behavior

Annotations should be added to the StorageCluster object without recreating the resource.

In the CRD (https://gist.github.com/rvdh/088ef343b3f7efa448448f2cc79e5df5), the spec.metadata.annotations field (line 460) is defined as type object and has the x-kubernetes-preserve-unknown-fields attribute set to true.
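For context, the relevant part of such a CRD schema looks roughly like this (a minimal sketch for illustration, not copied verbatim from the gist):

```yaml
# Fragment of a CustomResourceDefinition's openAPIV3Schema (illustrative).
# The annotations field is typed only as "object", with unknown keys
# preserved rather than fully described in the schema.
spec:
  type: object
  properties:
    metadata:
      type: object
      properties:
        annotations:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Because the keys under annotations are not enumerated in the schema, the provider has no complete type information for that subtree.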

According to https://github.com/hashicorp/terraform-provider-kubernetes/issues/1841, this will result in the resource being recreated. I understand that this is technically "expected behaviour", but I still feel it's a bug.

I'm just trying to update the annotations of an object here, and I can do this using the Kubernetes API without deleting and re-applying the manifest - so why can't the provider do the same?

Actual Behavior

Resource will be recreated:

 # module.cluster.module.portworx[0].kubernetes_manifest.storagecluster[0] must be replaced
-/+ resource "kubernetes_manifest" "storagecluster" {
      ~ manifest = {
        ... 
          ~ spec       = {
              ...
              ~ metadata                        = {
                  ~ annotations = null -> {
                      + "service/portworx-service" = {
                          + "co.elastic.monitor/hosts"                                     = "${data.host}:9001"
                          + "co.elastic.monitor/name"                                      = "${data.kubernetes.node.labels.kubernetes_io/clustername}-${data.kubernetes.service.name}"
                          + "co.elastic.monitor/processors.add_observer_metadata.geo.name" = "${data.kubernetes.node.labels.kubernetes_io/clustername}"
                          + "co.elastic.monitor/schedule"                                  = "@every 30s"
                          + "co.elastic.monitor/type"                                      = "tcp"
                        }
                    } # forces replacement
                  - labels      = null -> null
                }
              ...
                # (9 unchanged elements hidden)
            }
            # (2 unchanged elements hidden)
        }
      ~ object   = {
          ~ metadata   = {
...
              ~ metadata                        = {
                  ~ annotations = null -> {
                      + "service/portworx-service" = {
                          + "co.elastic.monitor/hosts"                                     = "${data.host}:9001"
                          + "co.elastic.monitor/name"                                      = "${data.kubernetes.node.labels.kubernetes_io/clustername}-${data.kubernetes.service.name}"
                          + "co.elastic.monitor/processors.add_observer_metadata.geo.name" = "${data.kubernetes.node.labels.kubernetes_io/clustername}"
                          + "co.elastic.monitor/schedule"                                  = "@every 30s"
                          + "co.elastic.monitor/type"                                      = "tcp"
                        }
                    }
                  ~ labels      = null -> (known after apply)
                }
...
        }

      + wait {
          + fields = {
              + "status.phase" = "Online"
            }
        }
    }

References

  • https://github.com/hashicorp/terraform-provider-kubernetes/issues/1841

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

rvdh avatar Dec 06 '22 15:12 rvdh

This is expected behaviour.

When the x-kubernetes-preserve-unknown-fields attribute is present in a CRD, it signals that the CRD does not contain complete schema information for the object it describes. Most of the time this is because the CRD authors did not define the schema in full detail to begin with. Without this information, the provider cannot guarantee that the attributes marked with this flag will not change their type as they get mutated by clients, which Terraform would consider a fatal violation. To prevent this, the provider disables updates to objects of such CRDs and instead replaces them every time there is a change.

I hope that explains it clearly enough.

alexsomesan avatar Dec 07 '22 14:12 alexsomesan

@alexsomesan Thanks for the explanation. I get that it's expected behaviour; I'm just saying it's not the right behaviour :)

We're talking about annotations (and labels, I guess) here: anybody can add annotations to the manifest. The CRD authors obviously can't add every possible annotation to the CRD, because it's the users who define them - in this case, for example, co.elastic.monitor/hosts.

The provider records the inferred type of the field as follows:

                        "annotations": [
                            "object",
                            {
                              "service/portworx-service": [
                                "object",
                                {
                                  "co.elastic.monitor/hosts": "string",
                                  "co.elastic.monitor/name": "string",
                                  "co.elastic.monitor/processors.add_observer_metadata.geo.name": "string",
                                  "co.elastic.monitor/schedule": "string",
                                  "co.elastic.monitor/type": "string"
                                }
                              ]
                            }
                          ]

I hope you can agree that a new annotation can't be a reason for destroying and recreating the object?

rvdh avatar Dec 07 '22 14:12 rvdh

@alexsomesan Thank you for the clarification.

I faced the same issue with another CRD. Is there any workaround for this behavior, even a temporary one? Or does it definitely require asking the CRD developers? It works well on provider version 2.9.0, but any higher version wants to recreate the resource.
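One workaround that is sometimes suggested is to list the loosely-typed paths in the computed_fields argument of kubernetes_manifest, which tells the provider not to track changes to those fields. Whether this actually avoids the replacement depends on the provider version and on where in the schema the preserve-unknown-fields flag sits - treat this as a sketch to try, not a guaranteed fix (the manifest file name here is hypothetical):

```hcl
resource "kubernetes_manifest" "storagecluster" {
  manifest = yamldecode(file("${path.module}/px-storagecluster.yaml"))

  # "metadata.annotations" and "metadata.labels" are computed by default;
  # nested paths such as spec.metadata.annotations must be added explicitly.
  computed_fields = [
    "metadata.annotations",
    "metadata.labels",
    "spec.metadata.annotations",
  ]
}
```

With the path marked as computed, the provider treats its value as decided by the cluster and should no longer diff changes to it in the plan.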

Sviatik avatar Jun 04 '23 15:06 Sviatik

Same here. I'm facing a problem with Terraform destroying storage-based resources when changing a configuration that should be updated in place. Is there any way to prevent Terraform from destroying the resources?

erezblm avatar Oct 26 '23 07:10 erezblm