terraform-provider-linode
[Bug]: Cannot RE-allocate storage
Terraform Version
Terraform 1.5.7 on Debian Stable
Linode Provider Version
version = "2.9.0"
Affected Terraform Resources
linode_instance_config, linode_instance_disk, linode_instance
Terraform Config Files
Demonstrator at: https://github.com/gesker/bugdemonstrator01
Short version...
resource "linode_instance_config" "cfg_instance_config" {
count = var.cfg_node_count
label = "cfg-instance-config-${count.index}"
linode_id = linode_instance.cfg_instance[count.index].id
booted = true
devices {
sda {
disk_id = linode_instance_disk.cfg_instance_boot[count.index].id # Second Apply Fails
}
}
# interface {
# purpose = "public"
# }
#
# interface {
# purpose = "vlan"
# label = "my-vlan"
# ipam_address = "10.0.1.${count.index}/24"
# }
}
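For context, the disk referenced by the sda block above comes from a separate linode_instance_disk resource in the demonstrator repository. A rough sketch of what that resource presumably looks like in the failing setup, assuming the demonstrator's variable names and the full-size expression hinted at later in this thread (the exact values are assumptions, not code copied from the repo):

resource "linode_instance_disk" "cfg_instance_boot" {
  count     = var.cfg_node_count
  label     = "cfg-instance-disk-boot-${count.index}"
  linode_id = linode_instance.cfg_instance[count.index].id

  # Assumption: sized to the plan's full disk allocation, which collides with
  # the disk the instance creates implicitly when image/root_pass are set on
  # the linode_instance resource (see the suggested fix further down).
  size       = linode_instance.cfg_instance[count.index].specs.0.disk
  filesystem = "ext4"
}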
Debug Output
No response
Panic Output
No panic, just an error:
│ Error: failed to create linode instance disk: [400] [size] You do not have enough unallocated storage to create this Disk.
│
│ with linode_instance_disk.cfg_instance_boot[0],
│ on linode_instance_cfg.tf line 1, in resource "linode_instance_disk" "cfg_instance_boot":
│ 1: resource "linode_instance_disk" "cfg_instance_boot" {
│
╵
Expected Behavior
The resource should be re-applied without error.
Actual Behavior
Unallocated storage error
Deleting the instance via the web UI and then re-applying allows the Terraform plan to move forward.
Steps to Reproduce
- Clone the demonstrator repository linked above.
- Add entries to terraform/backend.cfg
- Add good info where REDACTED in terraform/terraform.tfvars
- make tf_full_reset
- make tf_apply -- this SECOND apply is where the failure occurs
Steps 2 and 3 above are only needed because I removed passwords, tokens, etc.
Hi @gesker, thanks for reporting this issue! Are you removing the existing storage device before attempting to create the new one? It looks like the error is coming from your account rather than from the Terraform provider itself.
Happens on the second apply. May or may not be related, but I did notice that destroying/provisioning using device/devices seems to take longer. Just a data point.
How much time is in between tf_full_reset and tf_apply? There's a chance that the device is still in the process of being removed after running tf_apply.
A couple of hours.
I see, we'll take a closer look at this issue and let you know of any updates.
Check the allocated storage on the instance in the Linode UI. I worked around this by only ever allocating half the storage on a Linode.
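A minimal sketch of that half-the-storage workaround, assuming the resource and variable names from the demonstrator; the floor()/division expression is an assumption, not code taken from this thread:

resource "linode_instance_disk" "cfg_instance_boot" {
  count     = var.cfg_node_count
  label     = "cfg-instance-disk-boot-${count.index}"
  linode_id = linode_instance.cfg_instance[count.index].id

  # Assumption: request only half of the plan's disk allocation so the
  # implicitly created boot disk and this disk never compete for space.
  size       = floor(linode_instance.cfg_instance[count.index].specs.0.disk / 2)
  filesystem = "ext4"
}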
Hi @gesker and @lattwood, I think you would simply need to remove image, authorized_keys, and root_pass from the linode_instance resource named cfg_instance in the Terraform repository shared above.

If you specify these attributes on a linode_instance resource, Linode will implicitly create a disk (with a config) alongside the linode_instance resource, which leaves no space for an additional disk to be created with a linode_instance_disk resource on that Linode.

Removing them allows the linode_instance_disk resource to be the only disk on the Linode. You can then set these attributes on the disk resource instead.
Here is a Terraform code snippet captured from my modification of the sample resources shared.
resource "linode_instance_disk" "cfg_instance_boot" {
count = var.cfg_node_count
label = "cfg-instance-disk-boot-${count.index}"
linode_id = linode_instance.cfg_instance[count.index].id
# size = linode_instance.cfg_instance[count.index].specs.0.disk
size = 5000
filesystem = "ext4"
image = var.cfg_image_type
root_pass = var.root_password
authorized_keys = var.authorized_keys
}
resource "linode_instance" "cfg_instance" {
count = var.cfg_node_count
label = "cfg-instance-${count.index}"
region = var.linode_region
type = var.cfg_node_type
backups_enabled = var.cfg_backups_enabled
private_ip = false
# resize_disk = true
watchdog_enabled = true
# image = var.cfg_image_type
# authorized_keys = var.authorized_keys
# root_pass = var.root_password
}
This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.