terraform-provider-rancher2
[BUG] `rancher2_etcd_backup` is not populating the Kubernetes version when creating a snapshot
Rancher Server Setup
- Rancher version:
v2.8.0
- Installation option (Docker install/Helm Chart): Docker
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): RKE1
- Proxy/Cert Details:
Information about the Cluster
- Kubernetes version:
v1.27.8-rancher2-1
- Cluster Type (Local/Downstream): Downstream
- If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): Linode
User Information
- What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom) Cluster Owner
- If custom, define the set of permissions:
Provider Information
- What is the version of the Rancher v2 Terraform Provider in use? 3.2.0
- What is the version of Terraform in use? v1.6.6
Describe the bug
When using the `rancher2_etcd_backup` resource block with a provisioned RKE1 downstream cluster, the etcd snapshot is created as expected. However, the `kubernetesVersion` field is not populated. As a result, attempting to restore the created snapshot fails with an error.
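One way to confirm the missing field is to query the backup object from the Rancher API directly. This is a sketch; the `/v3/etcdbackups` endpoint and the `status.kubernetesVersion` field are assumptions based on the Rancher v3 management API, and `$RANCHER_URL`, `$RANCHER_TOKEN`, and `<cluster-id>` are placeholders:

```shell
# List etcd backups for the cluster and show only the name and the
# Kubernetes version recorded on each snapshot (endpoint/field assumed
# from the Rancher v3 API; adjust to your setup).
curl -sk -H "Authorization: Bearer $RANCHER_TOKEN" \
  "$RANCHER_URL/v3/etcdbackups?clusterId=<cluster-id>" \
  | jq '.data[] | {name: .name, kubernetesVersion: .status.kubernetesVersion}'
```

On an affected setup, the snapshot created via Terraform reports an empty `kubernetesVersion`, while a snapshot taken from the Rancher UI does not.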
To Reproduce
- Setup Rancher.
- Provision an RKE1 downstream cluster as a standard user using the `rancher2_cluster` resource block.
- Once provisioned, use the `rancher2_etcd_backup` resource block; validate that the Kubernetes version is blank.
- See the below `main.tf` as a reference:
```hcl
terraform {
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "3.2.0"
    }
  }
}

provider "rancher2" {
  api_url   = var.rancher_api_url
  token_key = var.rancher_admin_bearer_token
  insecure  = true
}

########################
# CREATE RKE1 CLUSTER
########################
resource "rancher2_cluster" "cluster" {
  name                                                        = var.cluster_name
  default_pod_security_admission_configuration_template_name = var.default_pod_security_admission_configuration_template_name

  rke_config {
    kubernetes_version = var.kubernetes_version

    network {
      plugin = var.network_plugin
    }
  }
}

########################
# CREATE ROLE TEMPLATE
########################
resource "rancher2_cluster_role_template_binding" "cluster_role_template_binding" {
  name             = "security-issue"
  cluster_id       = rancher2_cluster.cluster.id
  role_template_id = "projects-view"
  user_id          = "u-c6ll8"
}

########################
# CREATE NODE TEMPLATE
########################
resource "rancher2_node_template" "node_template" {
  name = var.node_template_name

  amazonec2_config {
    access_key     = var.aws_access_key
    secret_key     = var.aws_secret_key
    ami            = var.aws_ami
    region         = var.aws_region
    security_group = [var.aws_security_group_name]
    subnet_id      = var.aws_subnet_id
    vpc_id         = var.aws_vpc_id
    zone           = var.aws_zone
    root_size      = var.aws_root_size
    instance_type  = var.aws_instance_type
  }
}

########################
# CREATE ETCD NODE POOL
########################
resource "rancher2_node_pool" "etcd_node_pool" {
  cluster_id       = rancher2_cluster.cluster.id
  name             = var.etcd_node_pool_name
  hostname_prefix  = var.node_hostname_prefix
  node_template_id = rancher2_node_template.node_template.id
  quantity         = var.etcd_node_pool_quantity
  control_plane    = false
  etcd             = true
  worker           = false
}

########################
# CREATE CP NODE POOL
########################
resource "rancher2_node_pool" "control_plane_node_pool" {
  cluster_id       = rancher2_cluster.cluster.id
  name             = var.control_plane_node_pool_name
  hostname_prefix  = var.node_hostname_prefix
  node_template_id = rancher2_node_template.node_template.id
  quantity         = var.control_plane_node_pool_quantity
  control_plane    = true
  etcd             = false
  worker           = false
}

########################
# CREATE WORKER NODE POOL
########################
resource "rancher2_node_pool" "worker_node_pool" {
  cluster_id       = rancher2_cluster.cluster.id
  name             = var.worker_node_pool_name
  hostname_prefix  = var.node_hostname_prefix
  node_template_id = rancher2_node_template.node_template.id
  quantity         = var.worker_node_pool_quantity
  control_plane    = false
  etcd             = false
  worker           = true
}

########################
# CREATE ETCD BACKUP
########################
resource "rancher2_etcd_backup" "etcd_backup" {
  backup_config {
    enabled        = true
    interval_hours = 20
    retention      = 10
  }

  cluster_id = rancher2_cluster.cluster.id
  manual     = true
  name       = var.backup_name
}
```
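After `terraform apply`, the created snapshot can also be inspected as an `EtcdBackup` custom resource on the local (management) cluster. This is a sketch; the `management.cattle.io` CRD group, the `local` kubectl context, and the `<cluster-id>` namespace placeholder are assumptions about a typical Rancher install:

```shell
# Show each EtcdBackup resource with the Kubernetes version recorded in
# its status (CRD group and context are assumed; substitute your
# downstream cluster ID, e.g. c-xxxxx, for the namespace).
kubectl --context local get etcdbackups.management.cattle.io \
  -n <cluster-id> \
  -o custom-columns=NAME:.metadata.name,K8S_VERSION:.status.kubernetesVersion
```

On an affected setup, the snapshot created by this configuration shows an empty `K8S_VERSION` column.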
Actual Result
Creating an etcd backup DOES NOT populate the Kubernetes version.
Expected Result
Creating an etcd backup should successfully populate the Kubernetes version.