terraform-provider-kubernetes
enable force_update instead of destroy-and-recreate for custom resources
Description
Hello, we are using Terraform to deploy a Kafka cluster managed by the Strimzi operator (this request can be useful in many other cases) through a custom resource handled by the operator itself. The problem is that modifying a simple config value on the custom resource through the kubernetes_manifest resource completely destroys the Kafka object (taking the service down) and then recreates it, whereas applying the same YAML with kubectl lets the Strimzi operator perform a blue/green rollout of the new configuration.
Put differently: we need a way to only update certain Terraform resources in place, so that the operator itself decides whether they should be destroyed, recreated, or updated.
Potential Terraform Configuration
resource "kubernetes_manifest" "xxx" {
manifest = yamldecode(file("kube_manifests/kafka/strimzi-kafka-cluster2.yaml"))
}
where kube_manifests/kafka/strimzi-kafka-cluster2.yaml contains:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: xxx
  namespace: yyy
spec:
  kafka:
    config:
      default.replication.factor: 4
      ...
If I change default.replication.factor and apply through kubernetes_manifest:
Actual behavior
terraform plan -target=kubernetes_manifest.xxx
...
Plan: 1 to add, 0 to change, 1 to destroy.
and then the whole Kafka cluster is destroyed and recreated:
kubectl -n yyy get kafka,pods
No resources found in yyy namespace.
What we want
terraform plan -target=kubernetes_manifest.xxx
...
Plan: 0 to add, 1 to change, 0 to destroy.
so that the Strimzi operator (like any other operator) handles the change itself, as it does when we update the definition through kubectl:
kubectl -n yyy get kafka,pods
NAME                         DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS   READY   WARNINGS
kafka.kafka.strimzi.io/xxx   3                        3

NAME              READY   STATUS     RESTARTS   AGE
pod/xxx-kafka-0   1/1     Running    0          12m
pod/xxx-kafka-1   0/1     Init:0/1   0          5s
pod/xxx-kafka-2   1/1     Running    0          12m
...
Proposal
Add a resource-level parameter such as force_update_only to kubernetes_manifest:
resource "kubernetes_manifest" "xxx" {
manifest = yamldecode(file("kube_manifests/kafka/strimzi-kafka-cluster2.yaml"))
force_update_only = true
}
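With this flag set, the plan above would ideally report 0 to add, 1 to change, 0 to destroy, and the operator would reconcile the change in place instead of Terraform replacing the object.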
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
@cytar I know this won't address the issue with this particular resource type directly, but I had a much better time using kubectl_manifest.
It already does what you need (no special flag is required, since changes are tracked) and instead offers a "force_new" flag for when destroy-and-recreate is desired.
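For illustration, here is a minimal sketch of that workaround, assuming the community gavinbunney/kubectl provider and its documented yaml_body and force_new attributes:

terraform {
  required_providers {
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

# Applies the manifest the way kubectl apply would, so the Strimzi
# operator sees an in-place update rather than a delete/create.
resource "kubectl_manifest" "kafka" {
  yaml_body = file("kube_manifests/kafka/strimzi-kafka-cluster2.yaml")

  # Opt in to destroy-and-recreate only when explicitly desired.
  # force_new = true
}

With this in place, a change to default.replication.factor should plan as an in-place update rather than a replacement.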
Similar issue: #2375