
Wrong final value type after 0.5.0 upgrade

Open · Lexmark-peachj opened this issue on Jun 11, 2021 · 4 comments

Terraform, Provider, Kubernetes versions

Terraform version: v0.15.5
Provider version: 0.5.0
Kubernetes version: v1.20.7

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

variable "kubeconfig" {
  type        = string
  description = "Path to the temporary kubeconfig file"
}

provider "kubernetes-alpha" {
  config_path = var.kubeconfig
}

resource "kubernetes_manifest" "lcs-sli" {
  provider = kubernetes-alpha
  manifest = {
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "PrometheusRule"
    metadata = {
      labels = {
        app     = "prometheus-operator"
        release = "prometheus-operator"
      }
      name      = "prometheus-operator-k8s-sli-lcs.rules"
      namespace = "monitoring"
    }
    spec = {
      groups = [
        {
          name = "k8s-sli-lcs.rules"
          rules = [
            {
              expr   = "istio_requests_total{reporter=\"source\",response_code!~\"5..\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_requests_total:not500_all_dw"
            },
            {
              expr   = "istio_requests_total{reporter=\"source\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_requests_total:all_dw"
            },
            {
              expr   = "istio_request_duration_milliseconds_sum{reporter=\"source\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_request_duration_milliseconds_sum:all_dw"
            },
            {
              expr   = "istio_request_duration_milliseconds_count{reporter=\"source\",destination_workload!=\"unknown\",source_workload!=\"unknown\"}"
              record = "reporter:istio_request_duration_milliseconds_count:all_dw"
            },
            {
              expr   = "rate(reporter:istio_requests_total:not500_all_dw[1d])"
              record = "reporter:istio_requests_total:not500_all_dw_1d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:all_dw[1d])"
              record = "reporter:istio_requests_total:all_dw_1d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:not500_all_dw[7d])"
              record = "reporter:istio_requests_total:not500_all_dw_7d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:all_dw[7d])"
              record = "reporter:istio_requests_total:all_dw_7d:mean"
            },
            {
              expr   = "rate(reporter:istio_requests_total:all_dw[2m])"
              record = "reporter:istio_requests_total:all_dw:mean"
            },
            {
              expr   = "rate(reporter:istio_request_duration_milliseconds_sum:all_dw[2m])"
              record = "reporter:istio_request_duration_milliseconds_sum:all_dw:mean"
            },
            {
              expr   = "rate(reporter:istio_request_duration_milliseconds_count:all_dw[2m])"
              record = "reporter:istio_request_duration_milliseconds_count:all_dw:mean"
            },
            {
              expr   = "sum(reporter:istio_requests_total:not500_all_dw_1d:mean)"
              record = "reporter:istio_requests_total:not500_all_dw_1d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:all_dw_1d:mean)"
              record = "reporter:istio_requests_total:all_dw_1d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:not500_all_dw_7d:mean)"
              record = "reporter:istio_requests_total:not500_all_dw_7d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:all_dw_7d:mean)"
              record = "reporter:istio_requests_total:all_dw_7d"
            },
            {
              expr   = "sum(reporter:istio_requests_total:all_dw:mean)"
              record = "reporter:istio_requests_total:all_dw:sum"
            },
            {
              expr   = "sum(reporter:istio_request_duration_milliseconds_sum:all_dw:mean) by (destination_workload)"
              record = "reporter:istio_request_duration_milliseconds_sum:all_dw:sum"
            },
            {
              expr   = "sum(reporter:istio_request_duration_milliseconds_count:all_dw:mean) by (destination_workload)"
              record = "reporter:istio_request_duration_milliseconds_count:all_dw:sum"
            },
          ]
        },
      ]
    }
  }
}

Steps to Reproduce

  1. Create and save a plan using terraform plan -out tfplan
  2. Run terraform apply

Expected Behavior

We're applying a PrometheusRule custom resource for Prometheus. After the resource is applied, the prometheus-operator adds an annotation called "prometheus-operator-validated" to it; we do not set any annotations in our config. With 0.4.1 the provider does not attempt to manage this annotation and the apply succeeds. With 0.5.0, the apply should either succeed as before, or a lifecycle rule ignoring changes to the annotations should work.

Actual Behavior

 Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to kubernetes_manifest.lcs-sli, provider "provider[\"registry.terraform.io/hashicorp/kubernetes-alpha\"]" produced an unexpected new value: .object:
│ wrong final value type: incorrect object attributes.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

We also tried adding a lifecycle block as follows:

  lifecycle {
    ignore_changes = [
      manifest.metadata.annotations
    ]
  }

This resulted in the same problem. The plan shows that Terraform wants to remove the annotations, and the apply fails because Prometheus re-adds the annotation before the apply finishes.
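In the mainline hashicorp/kubernetes provider (which later absorbed `kubernetes_manifest`), the intended way to hand ownership of such fields to cluster controllers is the `computed_fields` argument rather than `ignore_changes`. A minimal sketch, assuming that provider rather than kubernetes-alpha:

```hcl
resource "kubernetes_manifest" "lcs-sli" {
  # computed_fields tells the provider not to expect these paths in the
  # post-apply object to match the plan. "metadata.annotations" and
  # "metadata.labels" are in the default set; they are listed here
  # explicitly for clarity.
  computed_fields = ["metadata.annotations", "metadata.labels"]

  manifest = {
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "PrometheusRule"
    # metadata and spec as in the configuration above
  }
}
```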

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Lexmark-peachj avatar Jun 11 '21 14:06 Lexmark-peachj

Got the same result when trying to deploy a HelmRelease with version 0.5.0

dniel avatar Jun 11 '21 21:06 dniel

Have the same error when applying Prometheus rules with the new release. Everything works fine with v0.4.1, but applying the same config with v0.5.0 results in the same error posted in this issue: Error: Provider produced inconsistent result after apply ... produced an unexpected new value: .object: wrong final value type: incorrect object attributes. ...

GustavJaner avatar Jun 15 '21 09:06 GustavJaner

I think I had this issue with v0.5 and

manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"

I was able to work around it by adding finalizers = ["elbv2.k8s.aws/resources"] to the metadata of the manifest.
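Expanded into a full resource, that workaround looks like the sketch below. The resource label, metadata name, and namespace are illustrative; the finalizer string is the one from the comment above, and the spec is omitted:

```hcl
resource "kubernetes_manifest" "tgb" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"
    metadata = {
      name      = "example-tgb" # illustrative
      namespace = "default"     # illustrative
      # Pre-declare the finalizer the controller would otherwise inject,
      # so the post-apply object matches the planned value.
      finalizers = ["elbv2.k8s.aws/resources"]
    }
    # spec omitted for brevity
  }
}
```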

pschiffe avatar Jun 21 '21 22:06 pschiffe

This is a known issue. It happens when both the user and some cluster component add values to the "annotations" map. We're currently looking at ways to address this.

alexsomesan avatar Jul 07 '21 15:07 alexsomesan