
Trying to re-apply kubernetes_manifest for OLM Subscription produces Error: Provider produced inconsistent results after apply

Open kyschouv opened this issue 3 years ago • 4 comments

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.0.3
Kubernetes provider version: 2.4.1
Kubernetes version: 1.21.2

Affected Resource(s)

kubernetes_manifest

Terraform Configuration Files

# Operator Lifecycle Manager is installed on the cluster before running this. Via:
# kubectl apply -f "https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.3/crds.yaml"
# kubectl apply -f "https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.3/olm.yaml"
resource "kubernetes_manifest" "cert_manager" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind" = "Subscription"

    "metadata" = {
      "name" = "cert-manager"
      "namespace" = "operators"
    }

    "spec" = {
      "channel" = "stable"
      "name" = "cert-manager"
      "source" = "operatorhubio-catalog"
      "sourceNamespace" = "olm"
    }
  }
}

Steps to Reproduce

  1. Create a Kubernetes cluster
  2. Install OLM version v0.18.3 on the cluster (per the above instructions)
  3. Try to install the above kubernetes_manifest on the cluster with Terraform multiple times

Expected Behavior

The kubernetes_manifest is installed/updated.

Actual Behavior

Terraform plan states it will be updating the resource with the following:

Terraform will perform the following actions:

  # kubernetes_manifest.cert_manager will be updated in-place
  ~ resource "kubernetes_manifest" "cert_manager" {
      ~ object   = {
          ~ metadata   = {
              - labels    = {
                  - operators.coreos.com/cert-manager.operators = ""
                } -> null
                # (2 unchanged elements hidden)
            }
            # (3 unchanged elements hidden)
        }
        # (1 unchanged attribute hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Then outputs an error during apply:

╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to kubernetes_manifest.cert_manager, provider
│ "provider[\"registry.terraform.io/hashicorp/kubernetes\"]" produced an
│ unexpected new value: .object: wrong final value type: incorrect object
│ attributes.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵

Important Factoids

Running on Azure AKS, but nothing special otherwise.

kyschouv avatar Aug 04 '21 06:08 kyschouv

Hi, this is a known issue and we're evaluating solutions for it.

In the meantime, you can work around it by adding to your manifest configuration the labels / annotations that the cluster API added after the first apply.

In your example, this would mean adding the following to your manifest metadata:

labels = {
  "operators.coreos.com/cert-manager.operators" = ""
}
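
Applied to the configuration from the issue, the metadata block would look roughly like this (a sketch; the label key and empty value are taken from the plan output above):

```hcl
"metadata" = {
  "name"      = "cert-manager"
  "namespace" = "operators"

  # Pre-declare the label that the OLM controller adds after the
  # first apply, so the provider no longer plans its removal.
  "labels" = {
    "operators.coreos.com/cert-manager.operators" = ""
  }
}
```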

Let me know if this works for you. As I mentioned, we're looking for a generic solution to this.

alexsomesan avatar Aug 04 '21 08:08 alexsomesan

Hi there, I can confirm that I see the same issue, but only for annotations. I can also confirm that the workaround above works.

It would be nice to have some way to ignore certain fields and to see the actual differences for debugging purposes. In our case the annotation was added by Prometheus:

    prometheus-operator-validated: 'true'

dl-mai avatar Aug 06 '21 07:08 dl-mai

If any webhook intercepts the resource request and sets default fields or finalizers (which is very common), then it sounds like Terraform currently won't work for that resource.
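
For what it's worth, later releases of the provider expose a `computed_fields` argument on `kubernetes_manifest` for exactly this: listing field paths the provider should treat as cluster-managed rather than diffing them. A sketch, assuming a provider version that supports it (check your version's documentation before relying on this):

```hcl
resource "kubernetes_manifest" "cert_manager" {
  # Treat these paths as computed so values set by controllers or
  # webhooks after apply do not produce a diff or an inconsistency
  # error. (Only available in newer provider releases.)
  computed_fields = ["metadata.labels", "metadata.annotations"]

  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"

    "metadata" = {
      "name"      = "cert-manager"
      "namespace" = "operators"
    }

    "spec" = {
      "channel"         = "stable"
      "name"            = "cert-manager"
      "source"          = "operatorhubio-catalog"
      "sourceNamespace" = "olm"
    }
  }
}
```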

lkysow avatar Aug 12 '21 16:08 lkysow

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

github-actions[bot] avatar Aug 13 '22 00:08 github-actions[bot]

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions[bot] avatar Oct 18 '22 02:10 github-actions[bot]