terraform-provider-hcloud
[Bug]: Internal IP overwrite with every apply
What happened?
Not sure if it's a bug or a wrong implementation on my side, but every time I run apply the provider replaces the internal IP of each host. As a result, the internal IP set inside the VM differs from the IP shown and used in the GUI, and the LB stops routing traffic to its targets, because internally the hosts have new IPs, or the same IP even ends up spanning two nodes.
What did you expect to happen?
The provider shouldn't replace the IP of the VM.
Please provide a minimal working example
prod.tfvars:
nodes = {
  node1 = {
    name          = "de-hc-rancher-prod01"
    role          = ["controlplane", "worker", "etcd"]
    server_image  = "centos-7"
    server_type   = "cx31"
    datacenter_id = 0
  },
  node2 = {
    name          = "fi-hc-rancher-prod02"
    role          = ["controlplane", "worker", "etcd"]
    server_image  = "centos-7"
    server_type   = "cx31"
    datacenter_id = 1
  },
  node3 = {
    name          = "de-hc-rancher-prod03"
    role          = ["controlplane", "worker", "etcd"]
    server_image  = "centos-7"
    server_type   = "cx31"
    datacenter_id = 2
  }
}
ssh_keys = {
  Sebastian = "ssh-rsa AAxxxQrEw==",
  deploy    = "ssh-rsa AAxxxp3lc="
}
variables.tf:
variable "nodes" {
  description = "Map of node objects including their name, role and server_type"
  type = map(object({
    name          = string,
    role          = list(string),
    server_type   = string,
    server_image  = string,
    datacenter_id = number,
  }))
}
variable "ssh_keys" {
  description = "Map of SSH keys allowed to log in to server. Key is the name of user, value is the SSH public key"
}
main.tf:
resource "hcloud_network" "lan" {
  name     = "default"
  ip_range = "192.168.0.0/16"
}

resource "hcloud_network_subnet" "lan" {
  network_id   = hcloud_network.lan.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "192.168.1.0/24"
}

resource "hcloud_ssh_key" "default" {
  for_each   = var.ssh_keys
  name       = each.key
  public_key = each.value
}

data "hcloud_datacenters" "ds" {
}

resource "hcloud_server" "server" {
  for_each    = var.nodes
  name        = each.value.name
  image       = each.value.server_image
  server_type = each.value.server_type
  datacenter  = element(data.hcloud_datacenters.ds.names, each.value.datacenter_id)
  ssh_keys    = keys(var.ssh_keys)
  labels      = { "role" : "kubernetes" }

  network {
    network_id = hcloud_network.lan.id
  }

  depends_on = [
    hcloud_network_subnet.lan,
    hcloud_network.lan,
  ]
}
I have also encountered this issue today; it seems like a bug in the provider where it doesn't track the network block of hcloud_server properly.
However, there is a straightforward workaround for everyone encountering this issue:
Delete the network block and create a new hcloud_server_network resource, where you can bind the server ID to your subnet ID, essentially achieving the same effect.
Here is how it would look in the example from @vaisov:
- network {
-   network_id = hcloud_network.lan.id
- }
+ resource "hcloud_server_network" "privatenet" {
+   server_id = hcloud_server.server.id
+   subnet_id = hcloud_network_subnet.lan.id
+ }
For adding multiple servers to the subnet you can either use the count meta-argument or just create more of these hcloud_server_network resources.
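Since the server resource in the original example uses for_each, a sketch of attaching all nodes with a single hcloud_server_network resource could look like this (the resource name privatenet is illustrative, and this assumes the network block has been removed from hcloud_server):

```hcl
# One attachment per node, keyed the same way as the hcloud_server.server map.
resource "hcloud_server_network" "privatenet" {
  for_each  = hcloud_server.server
  server_id = each.value.id
  subnet_id = hcloud_network_subnet.lan.id
}
```

This keeps the attachments in lockstep with the servers: adding or removing an entry in var.nodes adds or removes the matching attachment.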
Can confirm the bug as well - looks like it's "not just me".
Workaround: hardcode the private IP to the network:
  network {
    network_id = hcloud_network.internal.id
+   ip         = "10.6.1.2"
  }
With this, the changes stop.
The hcloud_server_network looks like the better choice for new deployments.
The hcloud_server_network looks like the better choice for new deployments.

Problem with that solution is: it detaches and attaches the server to the network on every terraform run :(
Scratch that... I still had the old network block inside the server block...
This is really annoying, as now I have to:
- create the hcloud_server
- attach it to the network using the hcloud_server_network
- do additional provisioning on hcloud_server, because the subnet is needed for my use case to work
If the server were not re-attached every time I apply my configuration, this could all be done in one step.
Having the same issue: we're using an automated process to start up a cluster, and now it requires manual intervention to properly attach servers to a network.
Have the same problem here. The changes also result in a disconnect of the k8s nodes that are running on that server... Is anyone from Hetzner still watching the issues here? This one is quite old, as are other reports...
Same for me. I want to use private IPs only, and that's not possible with hcloud_server_network. It's an absolute nightmare to deploy something with this provider when it's something as simple as private IPs only.
I have the same issue and a workaround:
lifecycle { ignore_changes = [network] }
With this, a change in network is ignored; as long as you don't change network membership, this works around it.
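Placed in the server resource from the original example, the workaround would look roughly like this (a sketch; note that ignore_changes also hides intentional edits to the network block from future plans):

```hcl
resource "hcloud_server" "server" {
  # ... existing arguments as in the example above ...

  network {
    network_id = hcloud_network.lan.id
  }

  # Ignore provider-side drift on the network block so apply stops
  # replacing the internal IP on every run.
  lifecycle {
    ignore_changes = [network]
  }
}
```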
This looks like the same bug as #556, which will be fixed by #593.
Please feel free to reopen or create a new issue if the problem persists.