terraform-provider-rancher2
bug: hcloud server_labels are not added to the server
Hi there,
I have tried adding server_labels to my node_template to distinguish between the roles a server has been given. So far all my tests have been unsuccessful, so I suspect there is a problem.
My config:
variable "node_pool" {
type = list(object({
name = string // name of the node template
node_prefix = string // prefix for the node name
server_type = string
description = string
labels = map(string)
firewall_rules = list(string) // list of firewall rule ids to apply
count = number
role = object({
master = bool
worker = bool
etcd = bool
})
}))
description = "Node pool configuration"
}
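For illustration only, a value for this variable might look like the following; all names, counts, and sizes here are made up and not part of the original report:
node_pool = [
  {
    name           = "workers"          # hypothetical pool name
    node_prefix    = "wrk"
    server_type    = "cpx31"
    description    = "Worker nodes"
    labels         = { cluster = "demo" }
    firewall_rules = []                 # no firewall rule ids in this sketch
    count          = 3
    role = {
      master = false
      worker = true
      etcd   = false
    }
  }
]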
resource "rancher2_node_driver" "hetzner_node_driver" {
active = true
builtin = false
name = "Hetzner"
ui_url = "https://storage.googleapis.com/hcloud-rancher-v2-ui-driver/component.js"
url = "https://github.com/JonasProgrammer/docker-machine-driver-hetzner/releases/download/3.7.1/docker-machine-driver-hetzner_3.7.1_linux_amd64.tar.gz"
whitelist_domains = ["storage.googleapis.com"]
}
resource "rancher2_node_template" "my_hetzner_node_template" {
for_each = { for np in var.node_pool : np.name => np }
name = format("%s-%s-%s-%s", var.machine_location, each.value.role.master == true ? "master" : (each.value.role.etcd == true ? "etcd" : "worker"), each.value.server_type, each.value.name)
driver_id = var.node_driver_id
hetzner_config {
api_token = var.hcloud_token
networks = local.hcloud_network_id
use_private_network = var.use_private_network
image_id = data.hcloud_image.custom.id
server_location = var.machine_location
server_type = each.value.server_type
userdata = templatefile("${path.module}/files/cloud-init.yml", {
sshd_port = 22
})
firewalls = each.value.firewall_rules
additional_keys = var.hcloud_machine_additional_public_keys
server_labels = {
cluster = var.cluster_name
ismaster = each.value.role.master
isetcd = each.value.role.etcd
isworker = each.value.role.worker
project = var.cluster_name
}
}
}
Terraform plan outcome:
      + hetzner_config {
          + additional_keys     = [
              + "ssh-key-1",
              + "ssh-key-2",
            ]
          + api_token           = (sensitive value)
          + firewalls           = (known after apply)
          + image               = "ubuntu-20.04"
          + image_id            = "15512617"
          + networks            = "<id>"
          + server_labels       = {
              + "cluster"  = "<cluster-name>"
              + "isetcd"   = "false"
              + "ismaster" = "false"
              + "isworker" = "true"
              + "project"  = "<project-name>"
            }
          + server_location     = "nbg1"
          + server_type         = "cpx31"
          + use_private_network = true
          + userdata            = <<-EOT
                #cloud-config
                packages:
                - ufw
                - fail2ban
                package_update: true
                package_upgrade: true
                runcmd:
                - sleep 10 && systemctl restart sshd
                write_files:
                - content: |
                    Port 22
                    PermitRootLogin prohibit-password
                    PasswordAuthentication no
                    AuthenticationMethods publickey
                    MaxAuthTries 5
                    AllowAgentForwarding no
                    AllowStreamLocalForwarding no
                    AllowTcpForwarding yes
                    X11Forwarding no
                  path: /etc/ssh/sshd_config.d/hardened.conf
            EOT
        }
@process0 can you validate that this functionality is not working?
The best way to check is to go to Rancher and view the node template in the API. I do notice that the hetznerConfig.serverLabels field is not set when viewed from the API. I'm not familiar enough with the Rancher API -> node driver connection to say whether this is a bug or not, because you can label a node by setting the label on the node template itself, via rancher2_node_template.labels.
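As a minimal sketch of that workaround (attribute values here are illustrative and not taken from the reporter's config), the labels live on the template itself rather than in hetzner_config.server_labels:
resource "rancher2_node_template" "labeled_example" {
  name      = "example-template"   # hypothetical name
  driver_id = var.node_driver_id

  # Labels set here are attached to the node template itself, independent of
  # hetzner_config.server_labels (the field this issue is about).
  labels = {
    cluster = var.cluster_name
    role    = "worker"
  }

  hetzner_config {
    api_token   = var.hcloud_token
    image       = "ubuntu-20.04"
    server_type = "cpx31"
  }
}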
This does raise the question of whether node template labels should be separated from node labels. Personally I think they should be, but it might not be easy to implement that consistently across all drivers.
This field was introduced in https://github.com/rancher/terraform-provider-rancher2/pull/851. I've tried to patch the node template from the Rancher API and couldn't get this to work. Maybe I'm missing something. I'm on the 3.7.1 driver.
@stefandanaita, what's your input?
I have the same problem:
hetzner_config {
  api_token           = var.hcloud_token
  image               = var.k8s_node_image
  server_location     = var.k8s_node_location
  use_private_network = false
  server_labels = {
    terraform = true
    k8s_node  = true
  }
  server_type = "cpx31"
}
And still nothing in the Rancher nodeTemplates API.
Driver version: 3.7.1
We discovered the same problem in our project. Driver version: 3.7.1, Rancher version: 2.6.5.
I found a solution by compiling the Rancher2 provider myself on my machine, until the merge requests are handled one day.
My solution also includes the missing fields for Firewalls, AdditionalKeys, and PlacementGroup, as suggested here: https://github.com/rancher/terraform-provider-rancher2/pull/894
The code in file schema_node_template_hetzner.go, line 17:
ServerLabel []string `json:"serverLabel,omitempty" yaml:"serverLabel,omitempty"`
Lines 41-45:
"server_label": {
	Type:        schema.TypeString,
	Optional:    true,
	Description: "Comma-separated list of labels which will be assigned to the server",
},
In file structure_node_template_hetzner.go, line 21:
if len(in.ServerLabel) > 0 {
	obj["server_label"] = strings.Join(in.ServerLabel, ",")
}
Line 87:
if v, ok := in["server_label"].(string); ok && len(v) > 0 {
	obj.ServerLabel = strings.Split(v, ",")
}
The Terraform file then contains this:
source "rancher2_node_template" "hetzner_create_template" {
labels = {
cluster=each.key
}
hetzner_config {
server_label = "cluster=${each.key}"
}
}
I use the same values for labels as for server_label, because I found via the GUI that both are set to the very same values. I just wanted to be on the safe side, without further analysing whether this is actually necessary.
The main findings are:
- The Rancher2 API expects an array; the provider field takes a comma-separated list that is split into that array (see the sketch below)
- The correct field name is server_label -> without a trailing "s" (!)
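As a minimal sketch of how several labels would be passed with the patched field (values are illustrative and this assumes the patched provider described above, not the released one):
hetzner_config {
  api_token   = var.hcloud_token
  server_type = "cpx31"
  # Singular field name; the patch splits this comma-separated string into
  # the serverLabel array sent to the Rancher2 API.
  server_label = "cluster=${each.key},project=${var.cluster_name}"
}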