terraform-provider-kubernetes

Error: Failed to morph manifest to OAPI type, issue with topologyKeys attribute after AKS upgrade from 1.21.9 -> 1.22.6


Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.0.4
Kubernetes provider version: v2.10.0
Kubernetes version: v1.22.6

Affected Resource(s)

  • kubernetes_manifest (for Service Object)

Terraform Configuration Files

resource "kubernetes_manifest" "service_application_1" {
  manifest = {
    "apiVersion" = "v1"
    "kind" = "Service"
    "metadata" = {
      "name" = "${var.application_1_name}"
      "namespace" = "${var.application_1_name}"
    }
    "spec" = {
      "ports" = [
        {
          "port" = var.application_1_k8s_service_port
          "targetPort" = var.application_1_k8s_service_target_port 
          "protocol" = "TCP"
        },
      ]
      "selector" = {
        "app" = "${kubernetes_manifest.deployment_application_1.object.spec.template.metadata.labels.app}" # reference "deployment" label
      }
      "type" = "ClusterIP"
    }
  }
  depends_on = [
    kubernetes_manifest.deployment_application_1,
  ]
}

Debug Output

Panic Output

╷
│ Error: Failed to morph manifest to OAPI type
│
│   with module.k8s-deployment-devportal.kubernetes_manifest.service_application_1,
│   on ../../modules/k8s-deployment-devportal/main.tf line 607, in resource "kubernetes_manifest" "service_application_1":
│  607: resource "kubernetes_manifest" "service_application_1" {
│
│ AttributeName("spec"): [AttributeName("spec")] failed to morph object
│ element into object element:
│ AttributeName("spec").AttributeName("topologyKeys"):
│ [AttributeName("spec").AttributeName("topologyKeys")] failed to morph
│ object element into object element:
│ AttributeName("spec").AttributeName("topologyKeys"): type is nil
╵

Steps to Reproduce

  1. terraform apply

Expected Behavior

Terraform apply should have completed without issue.

Actual Behavior

Terraform plan fails with the error shown above.

Important Factoids

The AKS cluster was auto-upgraded from v1.21.9 to v1.22.6 on 20/04/2022 at 6:50:05 am NZST.

  • No Terraform provider version change since the last successful terraform apply.

References

  • #1702

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

hyha0310 opened this issue on Apr 26, 2022

The topologyKeys attribute is only available in the Service resource schema when the corresponding feature flag is enabled on the cluster, as explained here: https://kubernetes.io/docs/concepts/services-networking/service-topology/#using-service-topology

It also appears this feature was deprecated in Kubernetes 1.21, with the topologyKeys field removed from the Service API in 1.22, which lines up with the cluster upgrade described above. See: https://kubernetes.io/docs/tasks/administer-cluster/enabling-service-topology/

It has been replaced by "Topology Aware Hints", as explained here: https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/
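For comparison, here is a minimal sketch of the replacement mechanism: with Topology Aware Hints, the opt-in is an annotation on the Service rather than a topologyKeys field in the spec. The resource name, namespace, and labels below are illustrative, and the annotation key assumes a cluster version that still uses service.kubernetes.io/topology-aware-hints (newer releases rename it to service.kubernetes.io/topology-mode):

resource "kubernetes_manifest" "service_with_hints" {
  manifest = {
    "apiVersion" = "v1"
    "kind"       = "Service"
    "metadata" = {
      "name"      = "example"
      "namespace" = "example"
      "annotations" = {
        # Opt-in for Topology Aware Hints; replaces spec.topologyKeys.
        "service.kubernetes.io/topology-aware-hints" = "auto"
      }
    }
    "spec" = {
      "ports" = [
        {
          "port"       = 80
          "targetPort" = 80
          "protocol"   = "TCP"
        },
      ]
      "selector" = {
        "app" = "example"
      }
      "type" = "ClusterIP"
    }
  }
}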

alexsomesan commented on Apr 27, 2022

Thanks for the reply @alexsomesan. As you can see in the Terraform configuration, we don't define any topologyKeys in our kubernetes_manifest, yet terraform plan returns the error specified above.

Would you have any workarounds or solutions for this issue?

hyha0310 commented on Apr 27, 2022

Could you share the state of that resource, as output by terraform state show <resource-path>? Also see the conversation on the other issue linked here.
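For reference, a hypothetical invocation of that command against the resource address taken from the error output above (adjust the module path to your own configuration):

terraform state show 'module.k8s-deployment-devportal.kubernetes_manifest.service_application_1'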

alexsomesan commented on Apr 28, 2022

@alexsomesan The state of the resource seems to have retained the topologyKeys attribute from the previous cluster version:

# module.k8s-deployment-helloworld.kubernetes_manifest.service_application_1:
resource "kubernetes_manifest" "service_application_1" {
    manifest = {
        apiVersion = "v1"
        kind       = "Service"
        metadata   = {
            name      = "dt00helloworld"
            namespace = "dt00helloworld"
        }
        spec       = {
            ports    = [
                {
                    port       = 80
                    protocol   = "TCP"
                    targetPort = 80
                },
            ]
            selector = {
                app = "dt00helloworld"
            }
            type     = "ClusterIP"
        }
    }
    object   = {
        apiVersion = "v1"
        kind       = "Service"
        metadata   = {
            annotations                = null
            clusterName                = null
            creationTimestamp          = null
            deletionGracePeriodSeconds = null
            deletionTimestamp          = null
            finalizers                 = null
            generateName               = null
            generation                 = null
            labels                     = null
            managedFields              = null
            name                       = "dt00helloworld"
            namespace                  = "dt00helloworld"
            ownerReferences            = null
            resourceVersion            = null
            selfLink                   = null
            uid                        = null
        }
        spec       = {
            allocateLoadBalancerNodePorts = null
            clusterIP                     = "10.0.233.21"
            clusterIPs                    = [
                "10.0.233.21",
            ]
            externalIPs                   = null
            externalName                  = null
            externalTrafficPolicy         = null
            healthCheckNodePort           = null
            internalTrafficPolicy         = null
            ipFamilies                    = [
                "IPv4",
            ]
            ipFamilyPolicy                = "SingleStack"
            loadBalancerClass             = null
            loadBalancerIP                = null
            loadBalancerSourceRanges      = null
            ports                         = [
                {
                    appProtocol = null
                    name        = null
                    nodePort    = null
                    port        = 80
                    protocol    = "TCP"
                    targetPort  = "80"
                },
            ]
            publishNotReadyAddresses      = null
            selector                      = {
                "app" = "dt00helloworld"
            }
            sessionAffinity               = "None"
            sessionAffinityConfig         = {
                clientIP = {
                    timeoutSeconds = null
                }
            }
            topologyKeys                  = null
            type                          = "ClusterIP"
        }
    }
}
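A possible interim workaround, sketched here under the assumption that dropping and re-importing the resource rebuilds the stored object against the cluster's current OpenAPI schema, and that the kubernetes_manifest import ID format "apiVersion=...,kind=...,namespace=...,name=..." applies; verify the resource address and import syntax against the provider documentation before running anything like this:

# Drop the object (recorded under the pre-1.22 Service schema) from state.
terraform state rm 'module.k8s-deployment-helloworld.kubernetes_manifest.service_application_1'

# Re-import the live Service so state is rebuilt from the cluster's current schema.
terraform import 'module.k8s-deployment-helloworld.kubernetes_manifest.service_application_1' \
  "apiVersion=v1,kind=Service,namespace=dt00helloworld,name=dt00helloworld"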

hyha0310 commented on Apr 28, 2022

This issue should be resolved with the changes proposed here: https://github.com/hashicorp/terraform-provider-kubernetes/pull/1780
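Once a provider release containing that change ships, pinning to it should pick up the fix. A sketch of the version constraint follows; the minimum version shown is an assumption and should be checked against the provider changelog:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # Assumed minimum version that includes PR #1780; confirm via the changelog.
      version = ">= 2.13.0"
    }
  }
}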

alexsomesan commented on Jul 19, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions[bot] commented on Oct 17, 2022