terraform-provider-hcloud
[Bug]: Recreating hcloud_server with primary IP created via extra resource fails
What happened?
When I create primary IPs via the hcloud_primary_ip resource and attach them to a hcloud_server in its public_net block, recreating the server fails. The API takes some time to reflect that the primary IP has been detached from the destroyed server, but the hcloud_server resource already reports itself as destroyed to Terraform. Terraform then tries to create the replacement server and immediately fails because the API complains that the primary IP is still in use.
If you re-run Terraform after a few seconds, it works.
What did you expect to happen?
I would expect the hcloud_server resource to wait for the primary IP assignment to fully clear before reporting the deletion as successful to Terraform.
This should avoid the issue.
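Roughly, I would picture the delete path doing something like the sketch below before reporting success. This is only written against the hcloud-go client (v2 import path assumed), not the provider's actual code, and waitForPrimaryIPDetach is a name I made up for illustration:

package sketch

import (
	"context"
	"fmt"
	"time"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

// waitForPrimaryIPDetach polls a primary IP until the API no longer reports it
// as assigned to a server, or until the context is cancelled.
func waitForPrimaryIPDetach(ctx context.Context, client *hcloud.Client, primaryIPID int64) error {
	for {
		ip, _, err := client.PrimaryIP.GetByID(ctx, primaryIPID)
		if err != nil {
			return err
		}
		if ip == nil || ip.AssigneeID == 0 {
			// Detached (or already gone): safe to assign it to the new server.
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("primary IP %d is still assigned: %w", primaryIPID, ctx.Err())
		case <-time.After(time.Second):
			// Poll again after a short delay.
		}
	}
}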
Please provide a minimal working example
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# hcloud_rdns.primary_v4 must be replaced
-/+ resource "hcloud_rdns" "primary_v4" {
~ id = "s-23118961-168.xxx" -> (known after apply)
~ server_id = 23118961 -> (known after apply) # forces replacement
# (2 unchanged attributes hidden)
}
# hcloud_rdns.primary_v6 must be replaced
-/+ resource "hcloud_rdns" "primary_v6" {
~ id = "s-23118961-2a01:xxx" -> (known after apply)
~ server_id = 23118961 -> (known after apply) # forces replacement
# (2 unchanged attributes hidden)
}
# hcloud_server.primary must be replaced
-/+ resource "hcloud_server" "primary" {
+ backup_window = (known after apply)
~ id = "23118961" -> (known after apply)
~ ipv4_address = "168.xxx" -> (known after apply)
~ ipv6_address = "2a01:xxx" -> (known after apply)
~ ipv6_network = "2a01:xxx/64" -> (known after apply)
- labels = {} -> null
~ location = "nbg1" -> (known after apply)
name = "xxx"
~ status = "running" -> (known after apply)
~ user_data = "xxx=" -> "yyy" # forces replacement
# (11 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 3 to add, 0 to change, 3 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
hcloud_rdns.primary_v6: Destroying... [id=s-23118961-2a01:xxx]
hcloud_rdns.primary_v4: Destroying... [id=s-23118961-168.xxx]
hcloud_rdns.primary_v6: Destruction complete after 2s
hcloud_rdns.primary_v4: Destruction complete after 3s
hcloud_server.primary: Destroying... [id=23118961]
hcloud_server.primary: Destruction complete after 0s
hcloud_server.primary: Creating...
╷
│ Error: primary ip already assigned to another server (primary_ip_assigned)
│
│ with hcloud_server.primary,
│ on server.tf line 17, in resource "hcloud_server" "primary":
│ 17: resource "hcloud_server" "primary" {
│
╵
ERRO[0018] 1 error occurred:
* exit status 1
I got hit by the same issue as well; this basically breaks the workflow of replacing nodes while keeping their network identity.
A possible solution would be to check here whether the assigned primary IPv4 or IPv6 has auto_delete disabled. If that is the case, the server is first stopped, then the corresponding unassign actions are performed, and only then is the actual deletion triggered.
This solution would not require any changes to the Hetzner Cloud API itself.
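As a rough illustration of that order of operations, a sketch against hcloud-go (v2 import path assumed); this is not the provider's real delete code, and deleteServerKeepingPrimaryIPs and waitForAction are invented names:

package sketch

import (
	"context"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

// deleteServerKeepingPrimaryIPs stops the server, unassigns primary IPs that
// are not auto-deleted, and only then deletes the server.
func deleteServerKeepingPrimaryIPs(ctx context.Context, c *hcloud.Client, server *hcloud.Server, ips []*hcloud.PrimaryIP) error {
	// Stop the server first; the unassign actions below expect a stopped server.
	action, _, err := c.Server.Poweroff(ctx, server)
	if err != nil {
		return err
	}
	if err := waitForAction(ctx, c, action); err != nil {
		return err
	}

	// Unassign every primary IP that has auto_delete disabled, so it survives the server.
	for _, ip := range ips {
		if ip.AutoDelete {
			continue // removed together with the server anyway
		}
		action, _, err := c.PrimaryIP.Unassign(ctx, ip.ID)
		if err != nil {
			return err
		}
		if err := waitForAction(ctx, c, action); err != nil {
			return err
		}
	}

	// Only now trigger the actual server deletion.
	_, err = c.Server.Delete(ctx, server)
	return err
}

// waitForAction blocks until the given action has finished or failed.
func waitForAction(ctx context.Context, c *hcloud.Client, action *hcloud.Action) error {
	_, errCh := c.Action.WatchProgress(ctx, action)
	return <-errCh
}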
I also just hit this and it's kind of blocking. I don't see a way around it short of dropping Terraform and using the Hetzner Cloud API directly.
Same here; this makes it impossible to use. The only way to solve it manually in this case is to run plan and apply twice (in the second run the primary IP is already unassigned).
The hcloud_server delete code does not correctly wait for the server delete action to complete before returning. Because of this, the new server resource is created before the old server is actually fully deleted.
We are currently working on a fix for this.
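Conceptually the change is for the delete step to block on the returned action; something along these lines, sketched with hcloud-go (v2 import path assumed), not the exact provider code:

package sketch

import (
	"context"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

// deleteServerAndWait deletes the server and blocks until the delete action
// has actually finished, so its primary IPs are already unassigned on return.
func deleteServerAndWait(ctx context.Context, c *hcloud.Client, server *hcloud.Server) error {
	result, _, err := c.Server.DeleteWithResult(ctx, server)
	if err != nil {
		return err
	}
	// Wait for the delete action to complete instead of returning immediately.
	_, errCh := c.Action.WatchProgress(ctx, result.Action)
	return <-errCh
}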