terraform-provider-kubernetes
Error: Cycle from kubernetes_job in Terraform
Terraform Version, Provider Version and Kubernetes Version
Terraform version: 1.15
Kubernetes provider version: 2.10
Kubernetes version: 1.18
Affected Resource(s)
- kubernetes_job
I have the following kubernetes_job resource defined:
resource "kubernetes_job" "demo" {
metadata {
name = "demo"
}
spec {
template {
metadata {}
spec {
container {
name = "pi"
image = "perl"
command = ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
}
restart_policy = "Never"
}
}
backoff_limit = 1
}
wait_for_completion = true
timeouts {
create = "2m"
update = "2m"
}
}
This job is supposed to run to completion once and be done with it... which it does, but a recurring error keeps appearing:
Error: Cycle: module.kubernetes_job.demo
I have found that the job in Kubernetes is listed as "complete" but the corresponding pod has disappeared, so to "fix" this I have to delete the job and run terraform plan and terraform apply again.
Is there a way to have the job (even though it says completed), start a new pod if the existing one disappears without this cycle error? Or could this be a bug with the provider?
Hi @jonathanprior,
I have tried to execute your example as a standalone code and cannot reproduce it.
The issue here is with the image you use. For some reason, the latest version of the perl image does not run the job correctly: the job never completes and the corresponding container ends up in an errored state. Once you switch to the perl:5.34.0 version from the Kubernetes documentation example, it works as expected. I get exactly the same behavior when using kubectl with YAML manifests: the latest perl image does not work, while perl:5.34.0 works fine.
All failed / successful pods remain on the cluster.
I don't see any issues with the provider at this moment.
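For reference, pinning the image only requires changing the container block in your example; here is a minimal sketch, assuming the rest of the resource stays exactly as you posted it above:

container {
  name    = "pi"
  image   = "perl:5.34.0" # pinned to the version from the Kubernetes docs example
  command = ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
}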
A few questions here:
- Does it work with the plain YAML manifest? (A manifest sketch follows this list.)
- Do you see that any pods were created by your job? kubectl describe job <NAME> should tell you this.
- Before you execute your TF code, run kubectl get pods -w and watch whether any corresponding pods are created for the job. You can also add -o yaml to get more details about them.
- Check the cluster events; you may find some clues there.
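For the first check, a manifest roughly equivalent to your resource could look like the sketch below. It mirrors your configuration but pins the image to perl:5.34.0 as suggested above, so treat it as an illustration rather than an exact translation:

# Sketch of a Job equivalent to the kubernetes_job above (image pinned by assumption)
apiVersion: batch/v1
kind: Job
metadata:
  name: demo
spec:
  backoffLimit: 1
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Applying it with kubectl apply -f and then running kubectl describe job demo should show whether the job and its pods behave the same way outside of Terraform.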
I hope that helps.
Thanks.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.