terraform-provider-kubernetes
kubernetes_manifest: Error: Failed to determine GroupVersionResource for manifest
When trying to deploy the jetstack module as part of the AWS ELB module, the apply fails because api_group is a value that is only known after apply.
Terraform Version, Provider Version and Kubernetes Version
Terraform v1.0.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.73.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.7.1
+ provider registry.terraform.io/hashicorp/null v3.1.0
Affected Resource(s)
- certificate_kube_system_aws_load_balancer_serving_cert
Terraform Configuration Files
resource "kubernetes_manifest" "certificate_kube_system_aws_load_balancer_serving_cert" {
manifest = {
"apiVersion" = "${module.jetstack-certmanager.api_group}/v1"
"kind" = "Certificate"
"metadata" = {
"labels" = {
"app.kubernetes.io/name" = var.name
}
"name" = "aws-load-balancer-serving-cert"
"namespace" = var.namespace
}
"spec" = {
"dnsNames" = [
"aws-load-balancer-webhook-service.kube-system.svc",
"aws-load-balancer-webhook-service.kube-system.svc.cluster.local",
]
"issuerRef" = {
"kind" = "Issuer"
"name" = "aws-load-balancer-selfsigned-issuer"
}
"secretName" = "aws-load-balancer-webhook-tls"
}
}
}
Debug Output
Panic Output
Steps to Reproduce
- terraform apply
Expected Behavior
The kubernetes_manifest resource should recognize that a key value is only known after apply and skip it during plan.
Actual Behavior
The provider fails with "Failed to determine GroupVersionResource for manifest".
Important Factoids
None
References
None
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Same issue here!
Hi!
As far as I can tell, the interpolation of ${module.jetstack-certmanager.api_group} in the apiVersion attribute is at fault here. The problem is that if that resource / module isn't already present in state (it is being created at the same time as this resource), its value isn't yet available at the earlier stages of the plan operation, where it is in fact required by the Kubernetes provider. This results in the value being interpolated as null and the provider trying to locate the "Certificate" resource in the "/v1" resource group, which is where only the cluster's built-in resources live.
First question: is the value of module.jetstack-certmanager.api_group really dynamically generated? If not, I would advise just using a plain static string for the apiVersion value.
If yes, you have to split the operation into two applies. The first apply creates module.jetstack-certmanager and the second creates any resources that need to interpolate values into apiVersion.
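For illustration, a minimal sketch of the first option, assuming the module installs cert-manager (whose Certificate resources are served under the cert-manager.io API group):

resource "kubernetes_manifest" "certificate_kube_system_aws_load_balancer_serving_cert" {
  manifest = {
    # Static string known at plan time, so the provider can resolve the
    # GroupVersionResource without waiting on the module output.
    "apiVersion" = "cert-manager.io/v1"
    "kind"       = "Certificate"
    # ... rest of the manifest unchanged
  }
}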
Let me know if this advice was helpful.
Apparently it also happens when no interpolation is used in the kubernetes_service manifest. I'm getting the same "Failed to determine GroupVersionResource for manifest" error.
env
Terraform v1.0.10
on darwin_amd64
+ provider registry.terraform.io/gavinbunney/kubectl v1.13.1
+ provider registry.terraform.io/hashicorp/google v3.90.1
+ provider registry.terraform.io/hashicorp/helm v2.4.1
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.8.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.2
Relevant code:
resource "kubernetes_manifest" "frontend_config" {
manifest = {
apiVersion = "networking.gke.io/v1"
kind = "FrontendConfig"
metadata = {
name = "argocd-frontend-config"
namespace = "argocd"
generation = 1
}
spec = {
redirectToHttps = {
enabled = true
}
}
}
}
Output:
╷
│ Error: Failed to determine GroupVersionResource for manifest
│
│ with kubernetes_manifest.frontend_config,
│ on gke-ingress.tf line 35, in resource "kubernetes_manifest" "frontend_config":
│ 35: resource "kubernetes_manifest" "frontend_config" {
│
│ cannot select exact GV from REST mapper
╵
Facing similar issues with CRDs.
Our scenario:
- install crossplane using helm_release
- install the aws crossplane provider using kubernetes_manifest
- configure the aws_provider using kubernetes_manifest « this is failing with the same error, as the CRD apiVersion is not there yet
Is this something that is resolvable in this module?
I have the same issue.
$> terraform --version
Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/hashicorp/kubernetes v2.10.0
resource "kubernetes_manifest" "customresourcedefinition_kubegres_kubegres_reactive_tech_io" {
manifest = {
"apiVersion" = "apiextensions.k8s.io/v1"
"kind" = "CustomResourceDefinition"
"metadata" = {
"annotations" = {
"controller-gen.kubebuilder.io/version" = "v0.4.1"
}
"creationTimestamp" = null
"name" = "kubegres.kubegres.reactive-tech.io"
}
"spec" = {
"group" = "kubegres.reactive-tech.io"
"names" = {
"kind" = "Kubegres"
"listKind" = "KubegresList"
"plural" = "kubegres"
"singular" = "kubegres"
}
"scope" = "Namespaced"
"versions" = [....]
}
}
}
resource "kubernetes_manifest" "kubegres_db_postgres_postgres" {
depends_on = [
kubernetes_manifest.customresourcedefinition_kubegres_kubegres_reactive_tech_io
]
manifest = {
"apiVersion" = "kubegres.reactive-tech.io/v1"
"kind" = "Kubegres"
"metadata" = {
"name" = "postgres"
"namespace" = kubernetes_manifest.namespace_db_postgres.manifest.metadata.name
}
# ....
}
}
and this generates the following error:
$> terraform plan
╷
│ Error: Failed to determine GroupVersionResource for manifest
│
│ with kubernetes_manifest.kubegres_db_postgres_postgres,
│ on postgres-cluster.tf line 33, in resource "kubernetes_manifest" "kubegres_db_postgres_postgres":
│ 33: resource "kubernetes_manifest" "kubegres_db_postgres_postgres" {
│
│ no matches for kind "Kubegres" in group "kubegres.reactive-tech.io"
╵
But when I remove/comment the kubernetes_manifest.kubegres_db_postgres_postgres resource, apply the CustomResourceDefinition first, and then add kubegres_db_postgres_postgres back, it works.
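A possible alternative to commenting the resource out, using the resource addresses from the snippet above, is to target the CRD in a first apply and then run a full apply:

$> terraform apply -target=kubernetes_manifest.customresourcedefinition_kubegres_kubegres_reactive_tech_io
$> terraform apply

This still takes two apply runs, but avoids editing the configuration between them.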
Similarly, on a terraform destroy this resource still tries to perform all kinds of checks first. When the resource doesn't exist for some reason, the terraform destroy will fail.
I feel the same pain...
My setup consists of 2 repos:
- the first prepares EKS and installs CRDs on it
- the second installs resources based on those CRDs
And now I see the issue "Error: Failed to determine GroupVersionResource for manifest" when running terraform destroy or terraform refresh on the second repo.
Same issue.
- Use helm_release to install CRDs
- Install kubernetes_manifest resources that use those CRDs
Error: Failed to determine GroupVersionResource for manifest
with kubernetes_manifest.
no matches for kind "xxxx" in group "XXXXXXX"
Also experiencing the same here.
Having the same issue. It would be easier to deploy everything at once by just allowing CRD validation to be skipped! If we could get an option on the Terraform resource to skip API validation for CRDs that are not there yet, it would work like a charm!
Same issue with the following configuration:
resource "helm_release" "rabbit_cluster_operator" {
name = "rabbitmq-cluster-operator"
repository = "https://charts.bitnami.com/bitnami"
chart = "rabbitmq-cluster-operator"
}
resource "kubernetes_manifest" "documents_rabbitmq_operator" {
depends_on = [helm_release.rabbit_cluster_operator]
manifest = {
"apiVersion" = "rabbitmq.com/v1beta1"
"kind" = "RabbitmqCluster"
"metadata" = {
"name" = "rabbit"
"namespace" = "default"
}
}
}
It would be great to add an option to the existing wait argument to wait for a named API before running the creation.
The same issue with argocd app of apps (argoproj.io apiVersion):
resource "helm_release" "argocd" {
name = "argocd"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
version = "5.13.2"
create_namespace = "true"
namespace = "argocd"
lint = true
}
resource "kubernetes_secret" "argocd_secret" {
depends_on = [helm_release.argocd]
metadata {
labels = {
"argocd.argoproj.io/secret-type" = "repository"
}
name = "argocd-deployment-secret"
namespace = "argocd"
}
data = {
password = "${var.argo_cd}"
url = "https://gitlab.com/my-group/deployment.git"
username = "argocd"
}
}
resource "kubernetes_manifest" "argocd_application" {
depends_on = [kubernetes_secret.argocd_secret]
manifest = {
apiVersion = "argoproj.io/v1alpha1"
kind = "Application"
metadata = {
name = "argocd-sync"
namespace = "argocd"
}
spec = {
destination = {
namespace = "argocd"
server = "https://kubernetes.default.svc"
}
project = "default"
source = {
path = "argocd/overlays/${var.environment_name}"
repoURL = "https://gitlab.com/my-group/deployment.git"
targetRevision = "HEAD"
}
syncPolicy = {
automated = {}
}
}
}
}
│ Error: Failed to determine GroupVersionResource for manifest
│
│ with kubernetes_manifest.argocd_application,
│ on main.tf line 329, in resource "kubernetes_manifest" "argocd_application":
│ 329: resource "kubernetes_manifest" "argocd_application" {
│
│ no matches for kind "Application" in group "argoproj.io"
If resource "kubernetes_manifest" "argocd_application" is commented and I run terraform plan / apply, everything is working. Only after that I am able to terraform plan / apply resource "kubernetes_manifest" "argocd_application". If trying to plan everything at he same time, getting provided error.
So, is there no way to disable the check for CRD existence during the planning phase?
Same issue as #1367. Please add a 👍🏻 to that issue to help prioritize the request.
Abandoned issue?
Same issue here
Same issue here
Try to wait { condition { type = "ContainersReady" status = "True" } }
Not working.
So does anyone have a workaround for this issue? Thank you in advance and regards
Would love a workaround here as I just hit this as well :(
I'm also facing the same issue.
Hi all, I encountered a similar problem with the ClusterIssuer CRD of cert-manager and fixed it with the Helm provider, because it doesn't validate the CRD against the Kubernetes API the way the kubernetes_manifest Terraform resource does. Here is my example:
resource "helm_release" "cert_manager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = var.cert_manager_namespace
create_namespace = true
version = var.cert_manager_release
dependency_update = true
values = [
yamlencode({
installCRDs = true
replicaCount = 2
})
]
}
resource "kubernetes_secret" "k8s_secret" {
depends_on = [helm_release.cert_manager]
for_each = { for secret in var.secretsmanager_secrets : secret.k8s_secret_name => secret }
metadata {
name = each.key
namespace = var.cert_manager_namespace
}
data = {
(each.value.secret_key) = jsondecode(data.aws_secretsmanager_secret_version.current[each.value.name].secret_string)[each.value.secret_key]
}
}
resource "helm_release" "cluster_issuers" {
depends_on = [helm_release.cert_manager, kubernetes_secret.k8s_secret]
name = "cluster-issuers"
repository = "https://bedag.github.io/helm-charts/"
chart = "raw"
version = "2.0.0"
namespace = var.cert_manager_namespace
values = [
yamlencode({
resources = var.cert_manager_manifests_cluster_issuers
})
]
}
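For reference, the resources value passed to the raw chart is simply a list of manifest maps. A hypothetical value for var.cert_manager_manifests_cluster_issuers (the issuer below is only an illustration, not part of the original setup) could look like this:

variable "cert_manager_manifests_cluster_issuers" {
  description = "Raw cert-manager manifests rendered by the bedag/raw chart"
  type        = list(any)
  default = [
    {
      apiVersion = "cert-manager.io/v1"
      kind       = "ClusterIssuer"
      metadata   = { name = "selfsigned-issuer" }
      spec       = { selfSigned = {} }
    }
  ]
}

Because the Helm provider only renders these manifests as chart values, nothing is validated against the cluster API at plan time, which is what sidesteps the GroupVersionResource lookup.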
✅
In case anyone gets here and hasn't figured it out yet... I was facing the same issue as everyone else in this thread.
The way I solved it was to convert my YAML-encoded string into native TF syntax. The dynamic value still enforces the dependency between resources, but now only that single attribute is unknown at plan time; with yamldecode() over a string containing a value that is only known after apply, the entire decoded manifest (including apiVersion) is unknown, so the provider cannot determine the GroupVersionResource.
Here's the concrete example for further understanding.
I converted this resource:
resource "kubernetes_manifest" "this" {
manifest = yamldecode(<<-EOF
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: pgpassword
namespace: default
spec:
data:
- remoteRef:
key: ${azurerm_key_vault_secret.this.name} # <- it fails because of this not being static value
secretKey: PGPASSWORD
refreshInterval: 1h
secretStoreRef:
kind: ClusterSecretStore
name: azure-keyvault
EOF
)
}
To this one:
resource "kubernetes_manifest" "this" {
manifest = {
apiVersion = "external-secrets.io/v1beta1"
kind = "ExternalSecret"
metadata = {
name = "pgpassword"
namespace = "default"
}
spec = {
data = [
{
remoteRef = {
key = azurerm_key_vault_secret.this.name # but this succeeds cause this is expected TF syntax
}
secretKey = "PGPASSWORD"
}
]
refreshInterval = "1h"
secretStoreRef = {
kind = "ClusterSecretStore"
name = "azure-keyvault"
}
}
}
}
And this solved it for me. 🚀