terraform-provider-kubernetes
kubernetes_manifest: 'status' attribute key is not allowed in manifest configuration
Terraform Version, Provider Version and Kubernetes Version
Terraform version: 1.0.7
Kubernetes provider version: 2.5.0
Kubernetes version: 1.21.2
Affected Resource(s)
- kubernetes_manifest
Terraform Configuration Files
It's a big HCL file, so it's easier to download it from https://raw.githubusercontent.com/pixie-labs/pixie/main/k8s/operator/crd/base/px.dev_viziers.yaml and inspect it with: echo 'yamldecode(file("px.dev_viziers.yaml"))' | terraform console
resource "kubernetes_manifest" "newrelic-crd-viziers" {
  manifest = yamldecode(file("px.dev_viziers.yaml"))
}
Debug Output
Panic Output
Steps to Reproduce
- terraform apply
Expected Behavior
It should apply cleanly; applying the same manifest with kubectl apply works.
Actual Behavior
Error:
│ Error: Forbidden attribute key in "manifest" value
│
│ with kubernetes_manifest.newrelic-crd-viziers,
│ on helm_newrelic.tf line 94, in resource "kubernetes_manifest" "newrelic-crd-viziers":
│ 94: resource "kubernetes_manifest" "newrelic-crd-viziers" {
│
│ 'status' attribute key is not allowed in manifest configuration
Important Factoids
References
- https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/164
- https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/158
- https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/246
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Thanks for opening this @trunet – this is actually by design. Terraform has no responsibility for setting the status of resources, and we haven't seen any use cases where a user would need to set a status by hand. You can simply remove the status field from this manifest, as it is unnecessary here.
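In HCL, dropping that key before handing the manifest to the provider could look something like this (a sketch only; the file path, local name, and resource name are illustrative, not from the original report):

```hcl
locals {
  # Decode the CRD YAML and drop the server-populated "status" key
  vizier_crd = {
    for key, value in yamldecode(file("${path.module}/px.dev_viziers.yaml")) :
    key => value if key != "status"
  }
}

resource "kubernetes_manifest" "viziers_crd" {
  manifest = local.vizier_crd
}
```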
I want to bump this. I know this is by design, but I think it is an issue because it seriously limits the use of kubernetes_manifest for installing CRDs. I actually opened an SO about this and discovered this issue: https://stackoverflow.com/questions/69180684/how-do-i-apply-a-crd-from-github-to-a-cluster-with-terraform/69527736#69527736
@jrhouston The problem is that official CRDs are published by providers, and it's most common to install them with kubectl directly, like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/application/master/deploy/kube-app-manager-aio.yaml
For this particular CRD, that is exactly what the official documentation recommends: https://cloud.google.com/solutions/using-gke-applications-page-cloud-console#preparing_gke
It seems MANY official CRDs set the status field. You're asking users to copy down and manually modify an official CRD instead of being able to install it from an official source.
Maybe this warrants a new resource or something. It feels like I should be able to install a CRD from an official source with Terraform as easily as I can with kubectl apply -f ...
As a user, I'm always going to just shell out and call kubectl, because that is so much simpler and more maintainable than keeping a local copy.
I have the same issue: we can't install CRDs that include a status field.
Take, for instance, the Calico CNI install, which per the AWS docs (https://docs.aws.amazon.com/eks/latest/userguide/calico.html) we should apply from:
https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-operator.yaml
Are we saying the right thing to do is to remove status?
Just a follow-up here: I removed the status check using a local build of the provider, and things get hung up later on with
AttributeName("status"): [AttributeName("status")] failed to morph object element into object element: AttributeName("status"): type is nil
The anti-status assumption seems to run deep.
Hi everyone, I've made a dirty workaround:
locals {
  split_yaml_map = { for file_path in fileset(path.module, "crds/${var.crd_version}/*.yaml") : file_path => yamlencode(
    # Drop the top-level "status" key from each decoded document
    { for root_key, root_values in yamldecode(file("${path.module}/${file_path}")) : root_key => root_values if root_key != "status" }
  ) }
}

resource "kubernetes_manifest" "crd" {
  for_each = local.split_yaml_map
  manifest = yamldecode(each.value)
}
It hasn't been properly tested yet, but I hope it helps somebody.
@jrhouston the design appears not to cover all the use cases. Could you please change it? It's quite painful to remove the status field from dozens of CRDs.
@mvoitko As a workaround, we are using the Kubectl Provider (https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs). I think it is a bad idea to edit any official CRDs; however, this solution does add one more Terraform provider.
resource "kubectl_manifest" "my_crds" {
yaml_body = file("${path.root}/my_crds.yaml")
}
@mvoitko First of all: Slava Ukraini! 💙 💛
To your observation, which design are you referring to? If you are converting your YAML manifests with our recommended tool (https://github.com/jrhouston/tfk8s), it has a -s flag that strips server-only fields, including status.
See the usage info here: https://github.com/jrhouston/tfk8s#usage
This will avoid the need for any of the hacks described above.
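For example, converting a downloaded CRD while stripping server-side fields might look like the following (a sketch based on the tfk8s README; the input and output file names are illustrative):

```
# -s strips server-side fields such as status and creationTimestamp
tfk8s -s -f px.dev_viziers.yaml -o px_dev_viziers.tf
```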
To re-iterate @red8888's point: stripping the status field does not cover all use cases. There are CRs out there that should be applied including their status field in order to be valid.
Custom block devices under OpenEBS are an example. Omitting the two status fields (claimState and state) for those will result in unusable resources. You could say that's a mistake on OpenEBS' side, not making their operators forgiving enough. But since kubectl does allow us to set the status directly, it would be nice if kubernetes_manifest could do so as well.
For the moment I worked around this limitation with:
- a kubernetes_manifest to create the CR without the status;
- a null_resource (that depends on the kubernetes_manifest) with a local_exec provisioner that executes kubectl patch to set the status.
Perhaps this is possible with the kubectl provider as well -- I haven't bothered to check.
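A sketch of that two-step approach (the resource names, file path, and patch payload are illustrative; the exact kubectl invocation depends on the CRD and on whether it enables the status subresource):

```hcl
resource "kubernetes_manifest" "block_device" {
  # Apply the CR without its status block
  manifest = {
    for key, value in yamldecode(file("${path.module}/blockdevice.yaml")) :
    key => value if key != "status"
  }
}

resource "null_resource" "block_device_status" {
  depends_on = [kubernetes_manifest.block_device]

  provisioner "local-exec" {
    # Patch the status fields back in once the CR exists
    command = "kubectl patch blockdevice example-device --type merge --patch '{\"status\":{\"claimState\":\"Unclaimed\",\"state\":\"Active\"}}'"
  }
}
```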
Glory to Heroes!
I used the Flux Terraform provider. Its resources produce manifests as one multiline string.
What a silly bug. Who asked for this forbidden-fields "feature"? How about just behaving exactly the way kubectl does and not forbidding certain fields? 😂
@jrhouston When applying CRDs there are numerous occasions where it's impossible to drop the status field, because the CRD is actually the definition of that status field. E.g. https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/crds/crd-alertmanagerconfigs.yaml#L4370
Here is a sample workaround for this, assuming that the status block is always at the end of each resource in the YAML:
data "http" "provider_k8s_crd" {
  url = "https://<crud-resource(s)>"
  request_headers = {
    Accept = "application/yaml"
  }
}

resource "kubernetes_manifest" "create_k8s_crd" {
  # TODO: if this fails, check the document separator and the location of the
  # status block in the source YAML
  for_each = toset(split("---", data.http.provider_k8s_crd.body))

  # Omit the trailing status block from each YAML document
  manifest = yamldecode(replace(each.value, "/(?s:\nstatus:.*)$/", ""))
}
Here's an example of a workaround using the Terraform kustomization provider.
Given the following CRD:
https://raw.githubusercontent.com/jenkinsci/kubernetes-operator/v0.7.1/config/crd/bases/jenkins.io_jenkins.yaml
one can remove the offending status field with a patch as follows:
data "kustomization_overlay" "jenkins_crds" {
resources = [
"https://raw.githubusercontent.com/jenkinsci/kubernetes-operator/v0.7.1/config/crd/bases/jenkins.io_jenkins.yaml"
]
patches {
patch = yamlencode([{
path: "/status",
op: "remove",
}])
target {
name = "jenkins.jenkins.io"
}
}
}
resource "kubernetes_manifest" "jenkins_crds" {
for_each = data.kustomization_overlay.jenkins_crds.manifests
manifest = jsondecode(each.value)
}
How in the heck has this been open for nearly two years? There was one dismissive response ("we don't see any use cases where this is necessary"), people have since provided a plethora of use cases (without which this provider is practically unusable in production), and it's still open without a clear resolution.
I'm now faced with a choice: fork the kubectl provider, which works exactly how you'd expect and how this provider should work, because my organization will not allow a third-party provider that is not HashiCorp or an official HashiCorp partner; or do some crazy workaround like forking, editing, and then maintaining thousands of lines of a chart with the "forbidden" fields stripped.
Are you guys serious?
I was facing the same issue with the Gateway CRDs, e.g. https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.0.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
To solve this, I copied the contents to a local file gatewayclasses.yaml and removed the status attribute:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
api-approved.kubernetes.io: https://github.com/kubernetes-sigs/gateway-api/pull/2466
gateway.networking.k8s.io/bundle-version: v1.0.0
gateway.networking.k8s.io/channel: standard
creationTimestamp: null
name: gatewayclasses.gateway.networking.k8s.io
spec:
group: gateway.networking.k8s.io
names:
categories:
- gateway-api
kind: GatewayClass
  ...
and deleted the trailing status block:
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: null
  storedVersions: null
Then I created a kubernetes_manifest resource like below:
resource "kubernetes_manifest" "crd_gateway" {
manifest = yamldecode(file("${path.root}/crds/gatewayclasses.yaml"))
}