terraform-provider-kubernetes
An option to ignore deletion of a resource
Description
In the configuration below, the secret I create with kubernetes_manifest.eck-elastic-superuser ends up being controlled by the kubernetes_manifest.eck-elasticsearch CRD in Kubernetes. So when we run terraform destroy, Terraform first deletes kubernetes_manifest.eck-elasticsearch, which deletes the secret on the Kubernetes cluster, and when Terraform then tries to destroy kubernetes_manifest.eck-elastic-superuser, it gets an error.
So we could add an apply_only flag that would only apply the resource and ignore it on deletion.
Potential Terraform Configuration
resource "kubernetes_manifest" "eck-elastic-superuser" {
manifest = yamldecode(templatefile("../scripts/eck/eck-elastic-superuser.yaml", {
namespace = kubernetes_namespace.observability.metadata[0].name,
password = data.vault_generic_secret.eck.data["elastic"]
}))
apply_only = true
}
resource "kubernetes_manifest" "eck-elasticsearch" {
depends_on = [kubectl_manifest.eck-elastic-superuser]
manifest = yamldecode(templatefile("../scripts/eck/eck-elasticsearch.yaml", {
namespace = kubernetes_namespace.observability.metadata[0].name
}))
field_manager {
force_conflicts = true
}
}
# ../scripts/eck/eck-elastic-superuser.yaml
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-es-elastic-user # <CRD Elasticsearch Name>-es-elastic-user
  namespace: ${namespace}
data:
  elastic: ${base64encode(password)}
# ../scripts/eck/eck-elasticsearch.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: ${namespace}
spec:
  version: 8.8.2
  auth:
    fileRealm:
      - secretName: elastic-filebeat-ingestion-user
    roles:
      - secretName: elastic-filebeat-ingestion-role
  nodeSets:
    - name: master
      count: 1
      config:
        node:
          roles:
            - master
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: gp3
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              resources:
                requests:
                  cpu: 100m
                  memory: 2000Mi
                limits:
                  cpu: 500m
                  memory: 2500Mi
    - name: data
      count: 1
      config:
        node:
          roles:
            - data
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: gp3
      podTemplate:
        spec:
          nodeSelector:
            workload-type: observability
          containers:
            - name: elasticsearch
              resources:
                requests:
                  cpu: 100m
                  memory: 2Gi
                limits:
                  cpu: 500m
                  memory: 2Gi
References
- https://registry.terraform.io/providers/cpanato/kubectl/latest/docs/resources/kubectl_manifest#apply_only
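For comparison, a minimal sketch of how the referenced kubectl provider exposes this on its kubectl_manifest resource (the resource name reuses the example above; check that provider's docs for the exact behavior):

resource "kubectl_manifest" "eck-elastic-superuser" {
  yaml_body = templatefile("../scripts/eck/eck-elastic-superuser.yaml", {
    namespace = kubernetes_namespace.observability.metadata[0].name,
    password  = data.vault_generic_secret.eck.data["elastic"]
  })

  # Create/update the object, but skip it on destroy.
  apply_only = true
}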
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
The current workaround would be to use terraform state rm to remove the resource that you no longer wish to track. Could you share a bit more info on what your current workflow looks like?
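For the configuration above, that workaround would be the following command (the resource address is taken from the example and would need to match your own state):

terraform state rm kubernetes_manifest.eck-elastic-superuser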
I can explain my current workflow with the code snippets I have shared above. I have to create the elasticsearch-es-elastic-user secret myself to override and set the password for the elastic user (the default user created by Elasticsearch); otherwise the Elasticsearch CRD auto-creates this secret.
If I don't create the elasticsearch-es-elastic-user secret and let the Elasticsearch CRD auto-create it, terraform destroy goes through without a hiccup. But when I create the secret manually, terraform destroy fails: even though I create the secret, the Elasticsearch CRD adds annotations to it and starts managing it once the CRD is created. So when I run terraform destroy, the secret is deleted along with the CRD, and when Terraform then tries to delete the secret itself, it fails because the resource no longer exists on the cluster even though it is still in the tfstate file.
This would be very helpful. I am using the kubernetes provider in the same way and have test cases with terraform test. These tests fail because the resource can't be deleted after the test finishes.
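A minimal sketch of such a test (hypothetical file and run names; terraform test destroys everything it applied once the run blocks complete, and that automatic destroy is what fails on the operator-managed secret):

# tests/eck.tftest.hcl (hypothetical file name)
run "apply_eck" {
  command = apply
}
# After the last run block, terraform test destroys the resources it created;
# deleting the secret the ECK operator already removed is the step that fails.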
I would suggest an ignore_destroy flag instead of apply_only, because mutable resources could still be updated by Terraform; only the destroy step would be handed over to the controller that owns the resource.
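Applied to the example above, the suggestion could look like this (a sketch of the proposed attribute, not something the provider implements today):

resource "kubernetes_manifest" "eck-elastic-superuser" {
  manifest = yamldecode(templatefile("../scripts/eck/eck-elastic-superuser.yaml", {
    namespace = kubernetes_namespace.observability.metadata[0].name,
    password  = data.vault_generic_secret.eck.data["elastic"]
  }))

  # Proposed attribute: Terraform still creates and updates the secret, but on
  # destroy it only removes the resource from state and leaves the object on
  # the cluster for the ECK operator to handle.
  ignore_destroy = true
}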