terraform-provider-azurerm
max_surge is required, but is not a valid parameter for spot node pools
Is there an existing issue for this?
- [X] I have searched the existing issues
Community Note
- Please vote on this issue by adding a :thumbsup: reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.
Terraform Version
1.8.5
AzureRM Provider Version
v3.109.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster_node_pool
Terraform Configuration Files
resource "azurerm_kubernetes_cluster_node_pool" "test" {
  name                   = "test"
  kubernetes_cluster_id  = local.id
  enable_host_encryption = true
  eviction_policy        = "Delete"
  vm_size                = "Standard_D16s_v3"
  priority               = "Spot"
  node_taints            = ["kubernetes.azure.com/scalesetpriority=spot:NoSchedule"]
  node_labels = {
    "kubernetes.azure.com/scalesetpriority" = "spot"
  }
  orchestrator_version = local.kubernetes_version
  os_disk_type         = "Ephemeral"
  enable_auto_scaling  = true
  max_pods             = 110
  min_count            = 0
  max_count            = 30

  upgrade_settings {
    max_surge = "50%"
  }

  vnet_subnet_id = local.subnet.id
  zones          = ["1", "2", "3"]
}
Debug Output/Panic Output
Not needed; the behaviour is reproducible without debug output.
Expected Behaviour
If I omit upgrade_settings entirely, the resource is marked as changed on every apply:
~ upgrade_settings {}
If I supply an empty upgrade_settings block (upgrade_settings {}), Terraform reports that max_surge is a required argument.
If I supply upgrade_settings with max_surge set, as in the configuration above, I get the error shown under Actual Behaviour.
I would expect one of two behaviors:
- upgrade_settings can be omitted and the resource is not marked as changed on every apply, so I don't have to explain the spurious diff to everyone who runs the module and asks "Did you change this?"
- upgrade_settings can be provided as an empty block.
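In the meantime, one possible workaround (an untested sketch; var.priority is a hypothetical variable introduced here, not part of the original configuration) is to emit the upgrade_settings block only for non-spot pools via a dynamic block:

```hcl
# Sketch: make upgrade_settings conditional, since it is a block and
# cannot simply be set to null. var.priority is hypothetical.
resource "azurerm_kubernetes_cluster_node_pool" "test" {
  name                  = "test"
  kubernetes_cluster_id = local.id
  vm_size               = "Standard_D16s_v3"
  priority              = var.priority
  eviction_policy       = var.priority == "Spot" ? "Delete" : null

  # Emit the block only when the pool is not a spot pool.
  dynamic "upgrade_settings" {
    for_each = var.priority == "Spot" ? [] : ["enabled"]
    content {
      max_surge = "50%"
    }
  }
}
```

Note this only avoids the API error for spot pools; it does not prevent the perpetual ~ upgrade_settings {} diff, which is the bug being reported here.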
Actual Behaviour
With no upgrade_settings:
# azurerm_kubernetes_cluster_node_pool.test will be updated in-place
~ resource "azurerm_kubernetes_cluster_node_pool" "test" {
id = "xxx"
name = "test"
tags = {}
# (33 unchanged attributes hidden)
- upgrade_settings {}
}
With upgrade_settings supplied, but empty, like this:
upgrade_settings {}
I get this error:
╷
│ Error: Missing required argument
│
│ on test.tf line 23, in resource "azurerm_kubernetes_cluster_node_pool" "test":
│ 23: upgrade_settings {
│
│ The argument "max_surge" is required, but no definition was found.
╵
With the config above, the plan says:
# azurerm_kubernetes_cluster_node_pool.test will be updated in-place
~ resource "azurerm_kubernetes_cluster_node_pool" "test" {
id = "xxx"
name = "test"
tags = {}
# (33 unchanged attributes hidden)
~ upgrade_settings {
+ max_surge = "50%"
}
}
but the apply fails with this error:
│ Error: updating Node Pool Agent Pool (Subscription: "xxxxx"
│ Resource Group Name: "rg-test"
│ Managed Cluster Name: "test"
│ Agent Pool Name: "test"): performing CreateOrUpdate: unexpected status 400 (400 Bad Request) with response: {
│ "code": "InvalidParameter",
│ "details": null,
│ "message": "The value of parameter agentPoolProfile.upgrade.maxSurge is invalid. Error details: Spot pools can't set max surge. Please see https://aka.ms/aks-naming-rules for more details.",
│ "subcode": "",
│ "target": "agentPoolProfile.upgrade.maxSurge"
│ }
│
│ with azurerm_kubernetes_cluster_node_pool.test,
│ on test.tf line 1, in resource "azurerm_kubernetes_cluster_node_pool" "test":
│ 1: resource "azurerm_kubernetes_cluster_node_pool" "test" {
│
╵
Steps to Reproduce
Run terraform apply with a spot node pool that either omits upgrade_settings, supplies upgrade_settings without max_surge, or sets upgrade_settings.max_surge, as described above. Each variant fails in one of the ways shown.
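For the API-error variant specifically, this minimal configuration reproduces it (a sketch assuming an existing cluster referenced by local.id; names are placeholders):

```hcl
# Minimal repro sketch: a spot pool with max_surge set.
# The AKS API rejects this with InvalidParameter:
# "Spot pools can't set max surge."
resource "azurerm_kubernetes_cluster_node_pool" "spot" {
  name                  = "spotrepro"
  kubernetes_cluster_id = local.id
  vm_size               = "Standard_D16s_v3"
  priority              = "Spot"
  eviction_policy       = "Delete"

  upgrade_settings {
    max_surge = "50%"
  }
}
```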
Important Factoids
No response
References
No response