terraform-provider-rancher2
[BUG] `rancher2_cluster` incorrectly allows multiple node pools to utilize the same name prefix
Rancher Server Setup
- Rancher version: v2.8.0
- Installation option (Docker install/Helm Chart): Docker
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc):
- Proxy/Cert Details:
Information about the Cluster
- Kubernetes version: v1.27.8-rancher2-1
- Cluster Type (Local/Downstream): Downstream
- If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): AWS
User Information
- What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom) Cluster Owner
- If custom, define the set of permissions:
Provider Information
- What is the version of the Rancher v2 Terraform Provider in use? 3.2.0
- What is the version of Terraform in use? v1.6.6
Describe the bug
When provisioning an RKE1 node driver cluster with the rancher2 provider, multiple node pools can be given the same hostname prefix. When provisioning an RKE1 cluster directly in the Rancher UI, you are unable to proceed with provisioning the downstream cluster until each node pool has a unique prefix. The provider has no equivalent validation, so nothing stops this from happening.
To Reproduce
- In your main.tf file, define multiple node pools that share the same hostname_prefix, as seen below:
```hcl
########################
# CREATE ETCD NODE POOL
########################
resource "rancher2_node_pool" "etcd_node_pool" {
  cluster_id       = rancher2_cluster.cluster.id
  name             = var.etcd_node_pool_name
  hostname_prefix  = var.node_hostname_prefix
  node_template_id = rancher2_node_template.node_template.id
  quantity         = var.etcd_node_pool_quantity
  control_plane    = false
  etcd             = true
  worker           = false
}

########################
# CREATE CP NODE POOL
########################
resource "rancher2_node_pool" "control_plane_node_pool" {
  cluster_id       = rancher2_cluster.cluster.id
  name             = var.control_plane_node_pool_name
  hostname_prefix  = var.node_hostname_prefix
  node_template_id = rancher2_node_template.node_template.id
  quantity         = var.control_plane_node_pool_quantity
  control_plane    = true
  etcd             = false
  worker           = false
}

########################
# CREATE WORKER NODE POOL
########################
resource "rancher2_node_pool" "worker_node_pool" {
  cluster_id       = rancher2_cluster.cluster.id
  name             = var.worker_node_pool_name
  hostname_prefix  = var.node_hostname_prefix
  node_template_id = rancher2_node_template.node_template.id
  quantity         = var.worker_node_pool_quantity
  control_plane    = false
  etcd             = false
  worker           = true
}
```
- Run `terraform apply` and validate that the cluster provisions successfully.
Actual Result
The RKE1 downstream cluster provisions successfully despite multiple node pools sharing the same hostname prefix.
Expected Result
The RKE1 downstream cluster SHOULD NOT provision when multiple node pools share the same hostname prefix; the provider should reject the configuration, matching the behavior of the Rancher UI.
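
Until the provider enforces this, a user-side guard is possible with a resource `precondition` (Terraform 1.2+). Below is a minimal sketch, assuming each pool is given its own prefix variable; the per-pool prefix variable names are hypothetical and not part of the repro config above:

```hcl
# User-side guard, not provider behavior. The per-pool prefix
# variables (var.*_hostname_prefix) are hypothetical names.
locals {
  hostname_prefixes = [
    var.etcd_hostname_prefix,
    var.control_plane_hostname_prefix,
    var.worker_hostname_prefix,
  ]
}

resource "rancher2_node_pool" "etcd_node_pool" {
  cluster_id       = rancher2_cluster.cluster.id
  name             = var.etcd_node_pool_name
  hostname_prefix  = var.etcd_hostname_prefix
  node_template_id = rancher2_node_template.node_template.id
  quantity         = var.etcd_node_pool_quantity
  etcd             = true

  lifecycle {
    # Fail the plan when any two pools share a hostname prefix,
    # mirroring the check the Rancher UI performs.
    precondition {
      condition     = length(local.hostname_prefixes) == length(distinct(local.hostname_prefixes))
      error_message = "Each rancher2_node_pool must use a unique hostname_prefix."
    }
  }
}
```

The same `precondition` block can be repeated on the control plane and worker pools so the error surfaces regardless of which resource Terraform evaluates first.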