terraform-provider-ovh
[BUG] ovh_cloud_project_kube_nodepool default timeouts too low
Describe the bug
Creating a node pool will often fail due to a timeout.
Terraform Version
N/A
OVH Terraform Provider Version
ovh/ovh v0.42.0
Affected Resource(s)
- ovh_cloud_project_kube_nodepool
Terraform Configuration Files
resource "ovh_cloud_project_kube_nodepool" "node_pool" {
  service_name  = var.service_name
  kube_id       = ovh_cloud_project_kube.my_kube_cluster.id
  name          = "nodepool"
  flavor_name   = "d2-8"
  desired_nodes = 3
  max_nodes     = 3
  min_nodes     = 3
}
Expected Behavior
Terraform waits until the node pool is created, then continues execution.
Actual Behavior
Terraform throws a timeout error if creation takes more than 20 minutes. The pool is eventually created, but Terraform execution halts on the error, leaving the Terraform state out of sync with the created resources.
Steps to Reproduce
- terraform apply
Workaround
This can be worked around by declaring explicit timeouts in the resource configuration:
resource "ovh_cloud_project_kube_nodepool" "node_pool" {
  timeouts {
    create = "1h"
    update = "1h"
    delete = "1h"
  }

  service_name  = var.service_name
  kube_id       = ovh_cloud_project_kube.my_kube_cluster.id
  name          = "nodepool"
  flavor_name   = "d2-8"
  desired_nodes = 3
  max_nodes     = 3
  min_nodes     = 3
}
Additional context
OVH will very often not spin up nodes within the default 20-minute timeout. While the default timeout can be overridden, I suggest increasing it to reflect a realistic range of creation times. This would remove the need for an override and reduce friction for new users.
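For reference, the provider-side change could be as small as raising the defaults in the resource schema. Below is a hedged sketch using the terraform-plugin-sdk conventions this provider is built on; the function name and the actual ovh_cloud_project_kube_nodepool resource definition in the provider source are assumptions for illustration and may differ:

```go
package provider

import (
	"time"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// resourceCloudProjectKubeNodePool is a hypothetical illustration of how
// default operation timeouts are declared with terraform-plugin-sdk.
// Only the Timeouts block is shown; schema and CRUD funcs are omitted.
func resourceCloudProjectKubeNodePool() *schema.Resource {
	return &schema.Resource{
		Timeouts: &schema.ResourceTimeout{
			// Suggested defaults, raised from the ~20 min users hit today.
			Create: schema.DefaultTimeout(1 * time.Hour),
			Update: schema.DefaultTimeout(1 * time.Hour),
			Delete: schema.DefaultTimeout(1 * time.Hour),
		},
		// ... resource schema and CRUD functions ...
	}
}
```

With defaults like these, the `timeouts` block in the workaround above would become unnecessary for typical node-pool creation times, while users could still override per resource.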