terraform-provider-hcloud

[Bug]: volume automount on attachment doesn't work as expected on server recreation

Open Handaleh opened this issue 4 years ago • 9 comments

What happened?

Automount doesn't work when an hcloud_volume_attachment is recreated to attach an existing volume to a newly built (recreated) instance (hcloud_server).

My setup is pretty simple:

data "template_file" "user_data" {
  template = file("${path.module}/scripts/user_data.yaml.tpl")
}

resource "hcloud_server" "instances" {
  count       = local.instances
  name        = "test-instance-${count.index}"
  image       = local.os_type
  server_type = local.server_type
  location    = local.location

  ssh_keys = [ var.ssh_key ]
  user_data = data.template_file.user_data.rendered
}

resource "hcloud_volume" "storages" {
  count             = local.instances
  name              = "test-vol-${count.index}"
  size              = local.disk_size
  location          = local.location
  format            = "xfs"
  delete_protection = true
}

resource "hcloud_volume_attachment" "storages_attachments" {
  count     = local.instances
  volume_id = hcloud_volume.storages[count.index].id
  server_id = hcloud_server.instances[count.index].id
  automount = true
}

The first time it's applied, all works as expected:

  • :heavy_check_mark: volume is attached and mounted
  • :heavy_check_mark: my script (user-data.yaml) is also executed successfully (it installs and configures a few tools on the server)

Now I apply a few changes to the script (something like echo "re-run!"), and now:

  • :heavy_check_mark: The old instance/server and the volume attachment get destroyed
  • :heavy_check_mark: The new instance and the volume attachment are created
  • :stop_sign: The volume is attached but not mounted!

Handaleh avatar Oct 17 '21 22:10 Handaleh

Hi @Handaleh,

I was able to reproduce the issue. According to my tests, the problem appears as soon as your user data contains a runcmd directive. The cause is not the Terraform provider but the way our backend handles automounting: internally we use a runcmd in the cloud-init vendor data to trigger the automount. This is not ideal and we are aware of it, but we cannot give a timeline for when, or whether, we will be able to change this.

As a workaround, can you please try including the following in the runcmd section of your user data?

udevadm trigger -c add -s block -p ID_VENDOR=HC --verbose -p ID_MODEL=Volume

This is the command we would execute if it were not overwritten.
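Applied to the reporter's setup, the workaround could look roughly like this in the user-data template (a sketch; the echo line stands in for whatever provisioning the original script does):

```yaml
#cloud-config
runcmd:
  # Existing provisioning steps go here (placeholder).
  - echo "re-run!"
  # Re-trigger the udev "add" events for Hetzner Cloud volumes so the
  # automount still happens even though this runcmd overrides the one
  # from the cloud-init vendor data.
  - udevadm trigger -c add -s block -p ID_VENDOR=HC --verbose -p ID_MODEL=Volume
```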

fhofherr avatar Nov 17 '21 12:11 fhofherr

@Handaleh @fhofherr

we also stumbled upon this issue with the hcloud_volume resource today. I can confirm that the mentioned workaround works. It would be great if it were no longer needed.

Additionally, a mount_point argument should be implemented to configure the volume's mount point so that it can be used by other resources. Currently we create a configurable symbolic link on the assumption that the current format /mnt/HC_Volume_<id> won't be changed.

resource "hcloud_server" "node1" {
  name        = "node1"
  image       = "ubuntu-22.04"
  server_type = "cx11"
}

resource "hcloud_volume" "important-data" {
  name        = "important-data"
  size        = 50
  mount_point = "/mnt/my-volume-mount-point"
  server_id   = hcloud_server.node1.id
  automount   = true
  format      = "ext4"
}
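The symlink workaround described above might look roughly like this in cloud-init user data (a sketch; the volume id below is hypothetical, and the /mnt/HC_Volume_<id> path is an undocumented implementation detail that may change):

```yaml
#cloud-config
runcmd:
  # Give other tools a stable path that points at the provider-chosen
  # mount point; only the symlink's target depends on the internal naming.
  - ln -s /mnt/HC_Volume_12345678 /mnt/my-volume-mount-point
```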

pschirch avatar Sep 15 '22 12:09 pschirch

This could probably be averted by specifying a merge_type merging strategy in the user data file:

merge_type: "list(append)+dict(recurse_list)+str()"

This causes runcmd directives to be appended rather than overwritten. It is a rather obscure feature of cloud-init, and curiously, I've just opened a PR to improve its documentation: cloudinit.readthedocs.io/en/latest/reference/merging.html
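In context, the directive sits at the top level of the cloud-config user data (a sketch; whether the vendor-data runcmd actually gets merged this way depends on the cloud-init version in the image):

```yaml
#cloud-config
merge_type: "list(append)+dict(recurse_list)+str()"
runcmd:
  # With list(append), these entries are appended to (not replacing)
  # runcmd entries from other cloud-config parts.
  - echo "re-run!"
```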

Edit: Seconding the desire for a mount_point argument. Having to figure this out in scripts isn't ideal.

Radiergummi avatar Jul 07 '23 08:07 Radiergummi

This also happens when creating a server with user_data defined: the volume is created and attached, but the mount does not take place.

lremes avatar Aug 31 '23 06:08 lremes

This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.

github-actions[bot] avatar Nov 29 '23 12:11 github-actions[bot]

mount_point = "/mnt/my-volume-mount-point"

My two cents for this one! Or something else that spares current scripts from relying on implementation details that are subject to change.

wirepatch avatar Jan 07 '24 13:01 wirepatch

This could probably be averted by specifying a merge_type property merging strategy in the user data file: ...

This isn't working for me, while the udevadm trigger ... workaround above does.

wirepatch avatar Jan 07 '24 14:01 wirepatch

I'm guessing this is still an open issue?

christianromeni avatar Mar 02 '24 02:03 christianromeni

I'm guessing this is still an open issue?

Yepp!

wirepatch avatar Mar 19 '24 08:03 wirepatch