Occasionally managed rke1 cluster re-provisioning fails using terraform provider


Rancher Server Setup

  • Rancher version: 2.6.2
  • Installation option (Docker install/Helm Chart):
    • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): RKE2
  • Proxy/Cert Details:

Information about the Cluster

  • Kubernetes version: 1.21.5
  • Cluster Type (Local/Downstream): local
    • If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider):

Describe the bug

Occasionally, managed RKE1 cluster re-provisioning fails when using the Terraform provider. Provider version: 1.22.2

To Reproduce

  • provision a managed RKE1 cluster using the Terraform provider (a minimal config is sketched after this list)
  • destroy it
  • re-provision it
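
A minimal sketch of the kind of config involved (the resource name "this" matches the error output below; the cluster name and network plugin are hypothetical examples, not taken from this report):

resource "rancher2_cluster" "this" {
  name        = "example-rke1" # hypothetical cluster name
  description = "Managed RKE1 cluster provisioned via Terraform"

  # A populated rke_config block makes this a Rancher-managed RKE1 cluster
  rke_config {
    network {
      plugin = "canal"
    }
  }
}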

Result

Provisioning fails with the following error:

module.cluster.rancher2_cluster.this: Creating...
│ Error: Bad response statusCode [409]. Status [409 Conflict]. Body: [baseType=error, code=AlreadyExists, message=clusterregistrationtokens.management.cattle.io "default-token" already exists] from [https://api.rancher.test.com/v3/clusterregistrationtokens]

Expected Result

Managed cluster provisioned successfully

Screenshots

Additional context

After destroy, the default-token disappears from ClusterRegistrationTokens, along with the namespace that was created for the new cluster, but re-provisioning still fails with the already-exists error.
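
For context on where this token comes from: the provider exposes the registration token as a computed attribute on the cluster resource, and it creates (or reuses) the "default-token" ClusterRegistrationToken behind the scenes to populate it. A sketch of reading that attribute, assuming the resource from the error output above (that this attribute's population is the exact call that 409s is an assumption, not confirmed in this thread):

output "node_registration_command" {
  # cluster_registration_token is a computed attribute on rancher2_cluster,
  # backed by the "default-token" ClusterRegistrationToken that the 409
  # above complains already exists
  value     = rancher2_cluster.this.cluster_registration_token[0].node_command
  sensitive = true
}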

riuvshyn avatar Feb 18 '22 14:02 riuvshyn

I get this sometimes on brand new clusters.

iTaybb avatar Apr 02 '22 20:04 iTaybb

I've gotten it twice in a row just now.

phillamb168 avatar May 28 '22 18:05 phillamb168

Can confirm, we have seen this too when deploying a new cluster. When applying a second time, TF finishes successfully.
Rancher: 2.6.2 and 2.6.5
K8s: v1.21.5-rancher1-1
TF provider:

...
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = ">= 1.12.0, < 2.0.0"
    }
  }

elvinasp avatar May 31 '22 06:05 elvinasp

> Can confirm, we have seen this too when deploying a new cluster. When applying a second time, TF finishes successfully.

This works for me

timBeehexa avatar Jul 21 '22 15:07 timBeehexa

Hitting the same issue randomly

principekiss avatar Sep 09 '22 14:09 principekiss

Ran into this with an rke2 cluster while debugging https://github.com/rancher/terraform-provider-rancher2/issues/993.

jakefhyde avatar Sep 20 '22 04:09 jakefhyde

Closing as duplicate of https://github.com/rancher/terraform-provider-rancher2/issues/1098

jakefhyde avatar Oct 17 '22 22:10 jakefhyde