terraform-provider-kubectl
Error: Cycle on destroy
When I make some changes in my kubectl_manifest resource, I get the following error on terraform apply:
Error: Cycle: kubectl_manifest.workers (destroy), google_container_cluster.sidekiq_workers, provider["registry.terraform.io/gavinbunney/kubectl"]
Here is my Terraform config:
provider "kubectl" {
host = google_container_cluster.sidekiq_workers.endpoint
cluster_ca_certificate = base64decode(google_container_cluster.sidekiq_workers.master_auth[0].cluster_ca_certificate)
token = data.google_client_config.provider.access_token
load_config_file = false
}
resource "kubectl_manifest" "workers_1" {
wait = true
yaml_body = yamlencode({
apiVersion: "apps/v1",
kind: "Deployment"
metadata: {
name: "workers_1"
namespace: "default"
labels: {
app: "workers_1"
}
},
spec: {
replicas: 1,
selector: {
matchLabels: {
app: "workers_1"
}
},
template: {
metadata: {
labels: {
app: "workers_1"
}
},
spec: {
containers: [
{
name: "worker-1",
image: local.api_image,
env: concat(local.cloud_run_envs, [
{ name = "DB_SOCKET", value = "/var/run/mysqld/mysqld.sock" },
{ name = "DB_HOST", value = google_sql_database_instance.database.private_ip_address },
{ name = "RUNTIME", value = "worker" }
])
}
]
}
}
}
})
}
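The cycle seems to come from the provider block itself: provider "kubectl" reads its connection details from the google_container_cluster.sidekiq_workers resource, so destroying or replacing the manifest pulls the provider, and therefore the cluster, into the same dependency graph. A workaround often suggested for this class of cycle is to feed the provider from data sources instead of the managed resource, so the provider configuration no longer depends on the cluster resource's lifecycle. A minimal sketch, assuming the cluster is named sidekiq-workers in europe-west1 (both placeholders, not from this issue):

data "google_client_config" "provider" {}

data "google_container_cluster" "sidekiq_workers" {
  name     = "sidekiq-workers" # hypothetical cluster name
  location = "europe-west1"    # hypothetical location
}

provider "kubectl" {
  # Reading from data sources decouples the provider from the managed
  # resource's create/destroy ordering.
  host                   = data.google_container_cluster.sidekiq_workers.endpoint
  cluster_ca_certificate = base64decode(data.google_container_cluster.sidekiq_workers.master_auth[0].cluster_ca_certificate)
  token                  = data.google_client_config.provider.access_token
  load_config_file       = false
}

The trade-off is that the data source must be able to look up the cluster before the first apply, so this fits best when the cluster is managed in a separate configuration or state.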
If I manually remove the workload from the GKE web UI first, then terraform apply works.
Seeing similar behavior when trying to recreate resources. @scleriot did you find a solution?
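Until the cycle itself is fixed, one interim path sometimes used is to take the manifest out of the graph by hand before changing the cluster, either with a targeted destroy or by dropping it from state (the resource address below matches the config above; adjust to yours):

terraform destroy -target=kubectl_manifest.workers_1
# or, if the Deployment was already deleted out-of-band (e.g. via the GKE UI):
terraform state rm kubectl_manifest.workers_1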