terraform-provider-kubernetes-alpha

v0.3.1 cannot find PrometheusRules in monitoring.coreos.com

Open · kradalby opened this issue 3 years ago · 20 comments

Terraform, Provider, Kubernetes versions

Terraform version: v0.14.8
Provider version: v0.3.1
Kubernetes version: v1.20.4+k3s1

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

resource "kubernetes_manifest" "test" {
  provider = kubernetes-alpha

  manifest = {
    "apiVersion" = "monitoring.coreos.com/v1"
    "kind"       = "PrometheusRules"
    "metadata" = {
      "labels" = {
        "app" = "mixin"
      }
      "name"      = "mixin-alerts"
      "namespace" = "monitoring"
    }
    "spec" = yamldecode(file("${path.module}/monitoring.d/mixins/alerts.yaml"))
  }
}

Debug Output

https://gist.github.com/kradalby/7275ce408959a89357ffbcca4d04f2a8

Panic Output

Error: Failed to determine GroupVersionResource for manifest

  on monitoring.tf line 2, in resource "kubernetes_manifest" "test":
   2: resource "kubernetes_manifest" "test" {

no matches for kind "PrometheusRules" in group "monitoring.coreos.com"

2021-03-12T09:11:21.120Z [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-03-12T09:11:21.123Z [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/hashicorp/kubernetes-alpha/0.3.1/darwin_amd64/terraform-provider-kubernetes-alpha_v0.3.1_x5 pid=63234
2021-03-12T09:11:21.123Z [DEBUG] plugin: plugin exited

Steps to Reproduce

  1. provider.tf has a >= 0.3.1 version constraint (sketched below)
  2. terraform init -upgrade
  3. terraform plan

I also verified this with a clean setup: a fresh terraform init and then a plan.
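For completeness, the provider requirement from step 1 looks roughly like this (the config_path is only an illustration; my real config points at the k3s cluster):

terraform {
  required_providers {
    kubernetes-alpha = {
      source  = "hashicorp/kubernetes-alpha"
      version = ">= 0.3.1"
    }
  }
}

provider "kubernetes-alpha" {
  # Illustrative only; the actual kubeconfig path differs on my machine.
  config_path = "~/.kube/config"
}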

Expected Behavior

What should have happened? A plan should have been generated successfully.

Actual Behavior

What actually happened?

An error indicating that the CRD isn't present. This worked with the previous v0.2.1 release.

Important Factoids

I use k3s as my distribution.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

kradalby · Mar 12 '21

Hi @kradalby

Is the PrometheusRules.monitoring.coreos.com/v1 CRD installed on the cluster before you run terraform plan on this kubernetes_manifest resource?

alexsomesan · Mar 12 '21

Ah sorry, I forgot that detail. In this case, yes, it is.

11:45:30 ❯ kubectl api-resources | ag monitoring
alertmanagerconfigs                            monitoring.coreos.com/v1alpha1         true         AlertmanagerConfig
alertmanagers                                  monitoring.coreos.com/v1               true         Alertmanager
podmonitors                                    monitoring.coreos.com/v1               true         PodMonitor
probes                                         monitoring.coreos.com/v1               true         Probe
prometheuses                                   monitoring.coreos.com/v1               true         Prometheus
prometheusrules                                monitoring.coreos.com/v1               true         PrometheusRule
servicemonitors                                monitoring.coreos.com/v1               true         ServiceMonitor
thanosrulers                                   monitoring.coreos.com/v1               true         ThanosRuler

In the future I would hope that I could set a dependency order and install the helm chart with Prometheus in the same plan. But that is not the issue right now.

kradalby · Mar 12 '21

It did apply correctly under 0.2.x when the CRD was installed.

kradalby · Mar 12 '21

@kradalby I'll try to reproduce this ASAP and get back to you.

alexsomesan · Mar 15 '21

The very same thing is happening here; I'm commenting just to note that I am using the Traefik CRDs (already applied to the cluster).

Pinning version 0.2.x works, but 0.3.x does not.

I'm also using k3s here.

bennesp · Mar 15 '21

@kradalby It looks to me like the name of the kind is PrometheusRule, not PrometheusRules. Kind names are singular; this is the general convention for kinds defined by CRDs. (You can see the correct kind names in the last column of the kubectl api-resources output you pasted above.)

It seems to work if you change that:
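For reference, this is the original resource with only the kind corrected (in my local test I used the default namespace and an empty spec, which is why the plan output below differs slightly):

resource "kubernetes_manifest" "test" {
  provider = kubernetes-alpha

  manifest = {
    "apiVersion" = "monitoring.coreos.com/v1"
    "kind"       = "PrometheusRule" # singular, matching the CRD's kind
    "metadata" = {
      "labels" = {
        "app" = "mixin"
      }
      "name"      = "mixin-alerts"
      "namespace" = "monitoring"
    }
    "spec" = yamldecode(file("${path.module}/monitoring.d/mixins/alerts.yaml"))
  }
}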

~/test-alpha-168 » terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_manifest.test will be created
  + resource "kubernetes_manifest" "test" {
      + manifest = {
          + apiVersion = "monitoring.coreos.com/v1"
          + kind       = "PrometheusRule"
          + metadata   = {
              + labels    = {
                  + app = "mixin"
                }
              + name      = "mixin-alerts"
              + namespace = "default"
            }
          + spec       = {}
        }
      + object   = {
          + apiVersion = "monitoring.coreos.com/v1"
          + kind       = "PrometheusRule"
          + metadata   = {
              + annotations                = (known after apply)
              + clusterName                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = {
                  + "app" = "mixin"
                }
              + managedFields              = (known after apply)
              + name                       = "mixin-alerts"
              + namespace                  = "default"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + groups = (known after apply)
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

alexsomesan · Mar 17 '21

OK, so I have done some more testing. You are right, I had some typos with the plurals somewhere. In addition, I see the following, which is more instability than outright "not working":

Occasionally I get an error that it cannot find the CRD; this seems to occur more often when I use multiple CRDs at the same time:

Error: Failed to determine GroupVersionResource for manifest

  on monitoring.tf line 260, in resource "kubernetes_manifest" "servicemonitor-metrics-enabled":
 260: resource "kubernetes_manifest" "servicemonitor-metrics-enabled" {

no matches for kind "ServiceMonitor" in group "monitoring.coreos.com"


Error: Failed to determine GroupVersionResource for manifest

  on monitoring.tf line 297, in resource "kubernetes_manifest" "mixin-alerts":
 297: resource "kubernetes_manifest" "mixin-alerts" {

no matches for kind "PrometheusRule" in group "monitoring.coreos.com"


Error: Failed to determine GroupVersionResource for manifest

  on monitoring.tf line 314, in resource "kubernetes_manifest" "mixin-rules":
 314: resource "kubernetes_manifest" "mixin-rules" {

no matches for kind "PrometheusRule" in group "monitoring.coreos.com"

The other issue is that I get panics on most occasions:

panic: inconsistent list element types (cty.Object(map[string]cty.Type{"interval":cty.String, "name":cty.String, "partial_response_strategy":cty.String, "rules":cty.List(cty.Object(map[string]cty.Type{"alert":cty.String, "annotations":cty.Map(cty.String), "expr":cty.String, "for":cty.String, "labels":cty.Map(cty.String), "record":cty.String}))}) then cty.Object(map[string]cty.Type{"interval":cty.String, "name":cty.String, "partial_response_strategy":cty.String, "rules":cty.List(cty.Object(map[string]cty.Type{"alert":cty.String, "annotations":cty.Map(cty.String), "expr":cty.DynamicPseudoType, "for":cty.String, "labels":cty.Map(cty.String), "record":cty.String}))}))

goroutine 480 [running]:
github.com/zclconf/go-cty/cty.ListVal(0xc000c036c0, 0x7, 0x7, 0xc00201c500, 0x4, 0x4, 0x37f2488)
        github.com/zclconf/[email protected]/cty/value_init.go:166 +0x57e
github.com/zclconf/go-cty/cty/msgpack.unmarshalList(0xc0025e0570, 0x37f2488, 0xc0021589e0, 0xc00201c500, 0x4, 0x4, 0xc0023c59b0, 0xc002132000, 0xc00008cd80, 0x0, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:161 +0x3ec
github.com/zclconf/go-cty/cty/msgpack.unmarshal(0xc0025e0570, 0x37f2418, 0xc0021589f0, 0xc00201c500, 0x3, 0x4, 0x2, 0x4, 0x2f9ac60, 0x487e440, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:52 +0x594
github.com/zclconf/go-cty/cty/msgpack.unmarshalObject(0xc0025e0570, 0xc0021b61e0, 0xc00201c500, 0x3, 0x4, 0xc0025df978, 0x100ee4c, 0x8d91558, 0xbf, 0x0, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:297 +0x4c8
github.com/zclconf/go-cty/cty/msgpack.unmarshal(0xc0025e0570, 0x37f2488, 0xc002158a10, 0xc001964780, 0x2, 0x2, 0x37f2488, 0xc002158be0, 0x2fc4160, 0xc0021b6270, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:60 +0x8dc
github.com/zclconf/go-cty/cty/msgpack.unmarshalObject(0xc0025e0570, 0xc0021b6210, 0xc001964780, 0x2, 0x2, 0x3796ac0, 0xc00009e070, 0xc0025dfe18, 0x137ec87, 0xc0021182c0, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:297 +0x4c8
github.com/zclconf/go-cty/cty/msgpack.unmarshal(0xc0025e0570, 0x37f2488, 0xc002158a30, 0xc0011614b0, 0x1, 0x1, 0x0, 0x6, 0x2fc4101, 0x6, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:60 +0x8dc
github.com/zclconf/go-cty/cty/msgpack.unmarshalDynamic(0xc0025e0570, 0xc0011614b0, 0x1, 0x1, 0x203000, 0x120, 0x6, 0xc002132000, 0xc00008cd80, 0x0)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:333 +0x998
github.com/zclconf/go-cty/cty/msgpack.unmarshal(0xc0025e0570, 0x37f23e0, 0x487e948, 0xc0011614b0, 0x1, 0x1, 0x37f2488, 0xc0020e7ed0, 0x2fc4160, 0xc00213d530, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:37 +0xb1d
github.com/zclconf/go-cty/cty/msgpack.unmarshalObject(0xc0025e0570, 0xc000d40270, 0xc0011614b0, 0x1, 0x1, 0x8f4e778, 0xc000d402a0, 0x37b93c8, 0xc000d402a0, 0xa52276b000000001, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:297 +0x4c8
github.com/zclconf/go-cty/cty/msgpack.unmarshal(0xc0025e0570, 0x37f2488, 0xc0011614a0, 0x0, 0x0, 0x0, 0x37f23e0, 0x487e948, 0x37f23e0, 0x487e948, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:60 +0x8dc
github.com/zclconf/go-cty/cty/msgpack.Unmarshal(0xc0020b4000, 0x8212, 0xa000, 0x37f2488, 0xc0011614a0, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/zclconf/[email protected]/cty/msgpack/unmarshal.go:22 +0x10b
github.com/hashicorp/terraform/plugin.(*GRPCProvider).PlanResourceChange(0xc00066a700, 0xc00005d398, 0x13, 0x37f2488, 0xc0012e1ce0, 0x0, 0x0, 0x37f2488, 0xc00114d530, 0x2fc4160, ...)
        github.com/hashicorp/terraform/plugin/grpc_provider.go:428 +0xa2b
github.com/hashicorp/terraform/terraform.(*EvalDiff).Eval(0xc0025e1ac8, 0x3827d20, 0xc0011bc0d0, 0x0, 0x0, 0x0, 0x0)
        github.com/hashicorp/terraform/terraform/eval_diff.go:250 +0xfb7
github.com/hashicorp/terraform/terraform.(*NodePlannableResourceInstance).managedResourceExecute(0xc000a29530, 0x3827d20, 0xc0011bc0d0, 0x34b36c8, 0xc000072800)
        github.com/hashicorp/terraform/terraform/node_resource_plan_instance.go:207 +0x58d
github.com/hashicorp/terraform/terraform.(*NodePlannableResourceInstance).Execute(0xc000a29530, 0x3827d20, 0xc0011bc0d0, 0xc00200bd02, 0x17adab2, 0x100bcdf)
        github.com/hashicorp/terraform/terraform/node_resource_plan_instance.go:39 +0xb3
github.com/hashicorp/terraform/terraform.(*ContextGraphWalker).Execute(0xc000d7e9c0, 0x3827d20, 0xc0011bc0d0, 0x897f3a8, 0xc000a29530, 0x0, 0x0, 0x0)
        github.com/hashicorp/terraform/terraform/graph_walk_context.go:127 +0xbf
github.com/hashicorp/terraform/terraform.(*Graph).walk.func1(0x33107a0, 0xc000a29530, 0x0, 0x0, 0x0)
        github.com/hashicorp/terraform/terraform/graph.go:59 +0x973
github.com/hashicorp/terraform/dag.(*Walker).walkVertex(0xc0010e2600, 0x33107a0, 0xc000a29530, 0xc00154e880)
        github.com/hashicorp/terraform/dag/walk.go:387 +0x322
created by github.com/hashicorp/terraform/dag.(*Walker).Update
        github.com/hashicorp/terraform/dag/walk.go:309 +0x1246



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Crash log: https://gist.github.com/kradalby/7b7cdd5c6cf3f15d90a3c22d38007cd3 (very long, sorry, but I can only reproduce it when there is a bit of load).

But I also get it occasionally, so it looks like it just struggles somewhere with a larger number of objects 🤔. I wonder whether it is on the Kubernetes side or the Terraform side.

kradalby · Mar 18 '21

Actually, the PrometheusRule that fails with the nil panic is a bit larger than the other ones, and it's the last one that consistently fails.

The file that fails is 52 kilobytes, while the next largest one that does not fail is 13 kilobytes.

kradalby · Mar 18 '21

@kradalby I'd be interested to test with those large resources if you can share them (unless they're sensitive / confidential).

alexsomesan · Mar 23 '21

It seems we have the same issue with v0.3.2:

Terraform v0.14.8
+ provider registry.terraform.io/digitalocean/digitalocean v2.6.0
+ provider registry.terraform.io/gavinbunney/kubectl v1.10.0
+ provider registry.terraform.io/gitlabhq/gitlab v3.5.0
+ provider registry.terraform.io/hashicorp/helm v2.0.3
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.2
+ provider registry.terraform.io/hashicorp/kubernetes-alpha v0.3.2
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hetznercloud/hcloud v1.25.2

resource "helm_release" "cert_manager" {
  chart = "cert-manager"
  name = "cert-manager"
  repository = "https://charts.jetstack.io"
  version = "1.2.0"
  namespace = "cert-manager"

  create_namespace = true
  wait = true

  set {
    name = "installCRDs"
    value = true
  }
}

resource "kubernetes_manifest" "cert_manager_cluster_issuer" {
  depends_on = [helm_release.cert_manager]
  provider = kubernetes-alpha
  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind = "ClusterIssuer"
    metadata = {
      name = "letsencrypt"
    }
    spec = {
      acme = {
        email = "[email protected]"
        server = "https://acme-v02.api.letsencrypt.org/directory"
        preferredChain = "ISRG Root X1"
        privateKeySecretRef = {
          name = "letsencrypt"
        }
        solvers = [{
          http01 = {
            ingress = {
              class = kubernetes_manifest.ingress_class.manifest.metadata.name
            }
          }
        }]
      }
    }
  }
}

Result:

Error: Failed to determine GroupVersionResource for manifest

  on cert_manager.tf line 17, in resource "kubernetes_manifest" "cert_manager_cluster_issuer":
  17: resource "kubernetes_manifest" "cert_manager_cluster_issuer" {

no matches for kind "ClusterIssuer" in group "cert-manager.io"

a0s · Mar 23 '21

@a0s Jinx you owe me a soda. I was writing the same issue as you (but like the OP, I'm using the Prometheus Helm chart) when your comment showed up. However, I think you and I may have a slightly different issue, namely that we are trying to load the CRDs and the manifest in the same Terraform run? Here is my version:

resource "helm_release" "monitoring" {
  name       = "kube-prometheus-stack"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  version    = "14.2.0"
}

resource "kubernetes_manifest" "traefik_metrics" {
  provider = kubernetes-alpha
  depends_on = [helm_release.monitoring]
  manifest = {
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "ServiceMonitor"
    metadata = {
      name      = "traefik-metrics"
      namespace = "kube-system"
      labels = {
        release = "kube-prometheus-stack"
      }
    }
    spec = {
      selector = {
        matchLabels = {
          app = "traefik"
        }
      }
      endpoints = [
        { port = "metrics" },
      ]
    }
  }
}

This fails in the planning stage, much like yours:

Error: Failed to determine GroupVersionResource for manifest

  on traefik.tf line 52, in resource "kubernetes_manifest" "traefik_metrics":
  52: resource "kubernetes_manifest" "traefik_metrics" {

no matches for kind "ServiceMonitor" in group "monitoring.coreos.com"

Are you supposed to be able to have a Helm chart add CRDs that you then use in the same run?

bittrance · Mar 23 '21

@alexsomesan Yep, no problem, I created a gist: https://gist.github.com/kradalby/d2ab437ac20c96f4940553330520696c

They are generated from the https://monitoring.mixins.dev project.

kradalby · Mar 24 '21

@bittrance I got cert-manager working by converting the latest cert-manager.yaml to HCL like this:

tfk8s --strip --file cert-manager_v1.2.0.yaml --output cert-manager_v1.2.0.tf --provider "kubernetes-alpha"

Then I fixed a few small errors and replaced several kubernetes_manifest resources with the native kubernetes_role_binding resource. I committed the result (as static .tf files) into my repo. I am not going to use helm_release for CRD installation in the future.
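To illustrate the kind of swap I mean, one of those replacements looks roughly like this (a sketch with made-up names, not the actual cert-manager objects, using the regular kubernetes provider):

resource "kubernetes_role_binding" "cert_manager_example" {
  # Hypothetical example of replacing a kubernetes_manifest RoleBinding
  # with the native resource from the hashicorp/kubernetes provider.
  metadata {
    name      = "cert-manager-example"
    namespace = "cert-manager"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = "cert-manager-example"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "cert-manager"
    namespace = "cert-manager"
  }
}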

Are you sure the CRDs are actually in the kube-prometheus-stack Helm chart? I found this: https://github.com/prometheus-community/helm-charts/issues/717. So I am going to convert https://github.com/prometheus-operator/prometheus-operator/blob/master/bundle.yaml to HCL as well...

a0s · Mar 26 '21

@a0s Well, I just had Terraform Cloud apply https://github.com/bittrance/krony-cloud/pull/2/files (the Helm part of the PR linked above) in my toy production cluster. Before applying:

$ kubectl --context=krony-prod-kubernetes get crds
No resources found in default namespace.

Terraform plan:

  # helm_release.monitoring will be created
  + resource "helm_release" "monitoring" {
      + atomic                     = false
      + chart                      = "kube-prometheus-stack"
      + cleanup_on_fail            = false
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "kube-prometheus-stack"
      + namespace                  = "default"
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://prometheus-community.github.io/helm-charts"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "14.2.0"
      + wait                       = true

After applying:

$ kubectl --context=krony-prod-kubernetes get crds
NAME                                        CREATED AT
alertmanagerconfigs.monitoring.coreos.com   2021-03-26T15:53:31Z
alertmanagers.monitoring.coreos.com         2021-03-26T15:53:31Z
podmonitors.monitoring.coreos.com           2021-03-26T15:53:32Z
probes.monitoring.coreos.com                2021-03-26T15:53:32Z
prometheuses.monitoring.coreos.com          2021-03-26T15:53:33Z
prometheusrules.monitoring.coreos.com       2021-03-26T15:53:33Z
servicemonitors.monitoring.coreos.com       2021-03-26T15:53:34Z
thanosrulers.monitoring.coreos.com          2021-03-26T15:53:34Z

So it certainly looks like

repository = "https://prometheus-community.github.io/helm-charts"
chart      = "kube-prometheus-stack"
version    = "14.2.0"

installs CRDs.

bittrance · Mar 26 '21

@a0s Where can I find the repository you are referring to? I am trying to solve the same issue where it says it cannot find the kind Issuer.

deorder · Apr 23 '21

Copied it here https://github.com/a0s/terraform-cert-manager

a0s · Apr 28 '21

Thanks, and how do you create, for example, an Issuer with Terraform using your module?

deorder · Apr 29 '21

@deorder

module "cert_manager" {
  source = "./cert-manager"

  providers = {
    kubernetes = kubernetes
    kubernetes-alpha = kubernetes-alpha
  }
}


resource "kubectl_manifest" "cert_manager_cluster_issuer" {
  depends_on = [module.cert_manager]
  yaml_body = <<YAML
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: my@email
    server: https://acme-v02.api.letsencrypt.org/directory
    preferredChain: "ISRG Root X1"
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx
YAML
}

a0s · Apr 29 '21

Ah thanks. So it cannot be done using Terraform HCL syntax with the Kubernetes Alpha provider. I tried to get it working like that for days.

deorder · Apr 29 '21

IMHO, there is no difference between kubectl_manifest and kubernetes-alpha for creating the cert_manager_cluster_issuer. I don't remember why kubectl_manifest was selected.
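For what it's worth, the same object can in principle be expressed in HCL with kubernetes_manifest by decoding the YAML, roughly like this (untested sketch; it assumes the CRDs are already installed, the same caveat as the rest of this thread):

resource "kubernetes_manifest" "cert_manager_cluster_issuer" {
  # Untested sketch: the same ClusterIssuer as the kubectl_manifest above,
  # decoded into the object value that kubernetes_manifest expects.
  provider   = kubernetes-alpha
  depends_on = [module.cert_manager]

  manifest = yamldecode(<<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt
    spec:
      acme:
        email: my@email
        server: https://acme-v02.api.letsencrypt.org/directory
        preferredChain: "ISRG Root X1"
        privateKeySecretRef:
          name: letsencrypt
        solvers:
          - http01:
              ingress:
                class: nginx
  YAML
  )
}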

a0s · Apr 29 '21