
Error with first terraform apply when using `rancher2_cloud_credential` with vSphere

Open vmahe35 opened this issue 2 years ago • 2 comments

Hello,

I am trying to provision K3s clusters with Rancher's built-in vSphere connector, driven by Terraform.

Here is my code:

terraform {
  required_version = ">= 1.1.0"

  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "1.23.0"
    }
  }
}

# Create a new rancher2 Cloud Credential for vSphere
resource "rancher2_cloud_credential" "vsphere_creds" {
  name        = "vsphere_creds"
  description = "Some credentials to access a vCenter server"
  vsphere_credential_config {
    vcenter  = var.vsphere_server // vSphere IP/hostname for vCenter
    username = var.vsphere_user
    password = var.vsphere_password
  }
}

// To get a unique naming suffix for the cluster and nodes, we use the random provider:
resource "random_id" "cluster_instance_id" {
  byte_length = 3
}

# Create vSphere machine config v2
resource "rancher2_machine_config_v2" "vsphere" {
  generate_name = "vsphere-config"

  vsphere_config {
    creation_type = "template"
    clone_from    = var.vm_template_name
    cpu_count   = var.vm_nb_cpu
    memory_size = var.vm_memory_size * 1024
    disk_size   = var.vm_system_disk_size
    datacenter  = var.vsphere_datacenter
    datastore   = var.vsphere_datastore
    folder  = var.vsphere_folder
    network = var.vsphere_network
    pool    = var.vsphere_resource_pool
  }
}

# Create a new rancher v2 vSphere K3S Cluster v2
resource "rancher2_cluster_v2" "cluster_on_vsphere" {
  name                                     = var.cluster_name != "" ? var.cluster_name : "${var.cluster_type_prefix}-on-vsphere-${random_id.cluster_instance_id.hex}"
  kubernetes_version                       = var.k8s_version
  enable_network_policy                    = false
  default_cluster_role_for_project_members = "user"

  // default timeouts are 30 minutes
  timeouts {
    create = "10m"
    update = "10m"
    delete = "10m"
  }

  // The map of Kubernetes labels to be applied to the cluster
  labels = var.labels

  local_auth_endpoint {
    ca_certs = ""    // (Optional) CA certs for the authorized cluster endpoint (string)
    enabled  = false // Enable the authorized cluster endpoint. Default true (bool)
    fqdn     = ""    // (Optional) FQDN for the authorized cluster endpoint (string)
  }

  rke_config {
    machine_pools {
      name                         = "pool1"
      cloud_credential_secret_name = rancher2_cloud_credential.vsphere_creds.id
      control_plane_role           = true
      etcd_role                    = true
      worker_role                  = true
      quantity                     = var.vm_count
      machine_config {
        kind = rancher2_machine_config_v2.vsphere.kind
        name = rancher2_machine_config_v2.vsphere.name
      }
    }
  }
}

I am using this code inside a homemade Terraform module.
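For reference, a call to such a module might look like this (the source path and all values below are placeholders of mine; only the module name "k3s_cluster" and the variable names come from the code and logs in this issue):

module "k3s_cluster" {
  source = "./modules/k3s-cluster" // hypothetical path

  // vCenter access (placeholder values)
  vsphere_server   = "vcenter.example.com"
  vsphere_user     = "terraform@example.com"
  vsphere_password = var.vsphere_password

  // vSphere placement (placeholder values)
  vsphere_datacenter    = "dc1"
  vsphere_datastore     = "datastore1"
  vsphere_folder        = "k3s-vms"
  vsphere_network       = "VM Network"
  vsphere_resource_pool = "k3s-pool"

  // Node template and sizing (placeholder values)
  vm_template_name    = "ubuntu-20.04-template"
  vm_nb_cpu           = 4
  vm_memory_size      = 8 // GiB; the module multiplies by 1024
  vm_system_disk_size = 40960
  vm_count            = 3

  // Cluster identity; an empty cluster_name falls back to the random suffix
  cluster_name        = ""
  cluster_type_prefix = "k3s"
  k8s_version         = "v1.23.6+k3s1" // placeholder K3s release
  labels              = {}
}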

When I use it, the first terraform apply fails with the following error:

module.k3s_cluster.random_id.cluster_instance_id: Creating...
module.k3s_cluster.random_id.cluster_instance_id: Creation complete after 0s [id=lDod]
module.k3s_cluster.rancher2_cloud_credential.vsphere_creds: Creating...
module.k3s_cluster.rancher2_machine_config_v2.vsphere: Creating...
module.k3s_cluster.rancher2_cloud_credential.vsphere_creds: Creation complete after 3s [id=cattle-global-data:cc-dc78q]
module.k3s_cluster.rancher2_machine_config_v2.vsphere: Creation complete after 5s [id=fleet-default/nc-vsphere-config-rm5fq]

Error: Provider produced inconsistent final plan

When expanding the plan for module.k3s_cluster.rancher2_cluster_v2.cluster_on_vsphere to include new values learned so far during apply, provider "registry.terraform.io/rancher/rancher2" produced an invalid new value for .rke_config[0].machine_pools[0].cloud_credential_secret_name: was cty.StringVal(""), but now cty.StringVal("cattle-global-data:cc-dc78q").

This is a bug in the provider, which should be reported in the provider's own issue tracker.

If I then run terraform apply a second time, it works fine.
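A generic two-step apply sometimes sidesteps "inconsistent final plan" errors by making the credential ID known before the cluster is planned; this is a standard Terraform technique, not a fix confirmed for this provider bug (the resource address below matches the log output above):

terraform apply -target=module.k3s_cluster.rancher2_cloud_credential.vsphere_creds
terraform apply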

Any idea where this comes from?

vmahe35 · Apr 29 '22

I am still facing the same issue.

rmammadli · Jun 09 '22

This is the same bug as the one reported in https://github.com/rancher/terraform-provider-rancher2/issues/835. Downgrade the provider to 1.22.2 and try to apply multiple times...
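For reference, a minimal sketch of that downgrade, pinning the version in the existing required_providers block:

terraform {
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "1.22.2" // downgraded from 1.23.0 as suggested above
    }
  }
}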

zach-mitchell-rtx · Jun 23 '22