terraform-provider-rancher2
Occasionally managed rke1 cluster re-provisioning fails using terraform provider
Rancher Server Setup
- Rancher version: 2.6.2
- Installation option (Docker install/Helm Chart):
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): RKE2
- Proxy/Cert Details:
Information about the Cluster
- Kubernetes version: 1.21.5
- Cluster Type (Local/Downstream): local
- If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider):
Describe the bug
Occasionally, re-provisioning a managed rke1 cluster fails using terraform provider version 1.22.2.
To Reproduce
- provision a managed rke1 cluster using the terraform provider (a minimal sketch follows this list)
- destroy it
- re-provision it
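For reference, a minimal sketch of the kind of cluster resource involved; the resource address module.cluster.rancher2_cluster.this matches the error output below, while the name, description, and CNI plugin are assumptions:

```hcl
# Minimal managed RKE1 cluster; Rancher creates the "default-token"
# ClusterRegistrationToken for it automatically during provisioning.
resource "rancher2_cluster" "this" {
  name        = "managed-rke1" # hypothetical cluster name
  description = "Managed RKE1 cluster provisioned via terraform"

  rke_config {
    kubernetes_version = "v1.21.5-rancher1-1" # version reported in this thread
    network {
      plugin = "canal" # assumed CNI; any supported plugin reproduces the flow
    }
  }
}
```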
Result
Provisioning fails with the following error:
module.cluster.rancher2_cluster.this: Creating...
│ Error: Bad response statusCode [409]. Status [409 Conflict]. Body: [baseType=error, code=AlreadyExists, message=clusterregistrationtokens.management.cattle.io "default-token" already exists] from [https://api.rancher.test.com/v3/clusterregistrationtokens]
Expected Result
Managed cluster is provisioned successfully.
Additional context
After the destroy, the default-token is gone from ClusterRegistrationTokens, along with the namespace that was created for the new cluster, but re-provisioning still fails with the "already exists" error.
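For context, the token named in the 409 error is the ClusterRegistrationToken that Rancher creates for every cluster and that the provider exposes as a computed attribute on the cluster resource; a sketch of how it is typically referenced (the output name here is hypothetical):

```hcl
output "registration_command" {
  # cluster_registration_token is a computed attribute of rancher2_cluster
  # (a list of max one item); it wraps the "default-token"
  # ClusterRegistrationToken named in the 409 error.
  value     = rancher2_cluster.this.cluster_registration_token[0].node_command
  sensitive = true
}
```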
I get this sometimes on brand new clusters.
I've gotten it twice in a row just now.
Can confirm, we have seen this too when deploying a new cluster. When applying a second time, TF finishes successfully.
Rancher 2.6.2 and 2.6.5
K8s: v1.21.5-rancher1-1
TF provider:
...
required_providers {
  rancher2 = {
    source  = "rancher/rancher2"
    version = ">= 1.12.0, < 2.0.0"
  }
}
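For completeness, the matching provider configuration usually looks something like the following; the API URL echoes the error output above, and the token variable is a hypothetical placeholder:

```hcl
provider "rancher2" {
  api_url   = "https://api.rancher.test.com" # Rancher API endpoint from the error above
  token_key = var.rancher2_token             # hypothetical variable holding an API token
}
```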
Applying a second time works for me too.
Hitting the same issue randomly
Ran into this with an rke2 cluster while debugging https://github.com/rancher/terraform-provider-rancher2/issues/993.
Closing as duplicate of https://github.com/rancher/terraform-provider-rancher2/issues/1098