viya4-iac-azure
kubectl v1.19.9 not supported in ANY Azure Location
Terraform Version Details
malepr@cldlgn04:~/public/fsbu-v4aks-mm-mrm/viya4-iac-azure$ ./files/tools/iac_tooling_version.sh
{
  "terraform_version": "1.0.0",
  "terraform_revision": "null",
  "terraform_outdated": "true",
  "provider_selections": {"registry.terraform.io/hashicorp/azuread":"1.5.0","registry.terraform.io/hashicorp/azurerm":"2.62.0","registry.terraform.io/hashicorp/cloudinit":"2.2.0","registry.terraform.io/hashicorp/external":"2.1.0","registry.terraform.io/hashicorp/kubernetes":"2.3.1","registry.terraform.io/hashicorp/local":"2.1.0","registry.terraform.io/hashicorp/null":"3.1.0","registry.terraform.io/hashicorp/template":"2.2.0","registry.terraform.io/hashicorp/tls":"3.1.0"}
}
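For reference, the same fields can also be inspected directly with Terraform's built-in JSON output (a minimal sketch, assuming Terraform 0.13 or later; whether the helper script above uses this internally is not confirmed here):

# Print version, provider selections, and outdated flag as JSON.
terraform version -json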
Terraform Variable File Details
# !NOTE! - These are only a subset of CONFIG-VARS.md provided as examples.
# Customize this file to add any variables from 'CONFIG-VARS.md' whose default values you
# want to change.
# **************** REQUIRED VARIABLES ****************
# These required variables' values MUST be provided by the User
prefix = "fsbu-v4aks-mm-mrm" # this is a prefix that you assign for the resources to be created
location = "centralus" # e.g., "eastus2"
ssh_public_key = "~/.ssh/azure/id_rsa.pub" # Name of file with public ssh key for connecting to the VMs
# **************** REQUIRED VARIABLES ****************
# !NOTE! - Without specifying your CIDR block access rules, ingress traffic
# to your cluster will be blocked by default. In a SCIM environment,
# the AzureActiveDirectory service tag must be granted access to port
# 443/HTTPS for the ingress IP address.
# ************** RECOMMENDED VARIABLES ***************
# tags in azure
tags = { "owner" = "$USER", "resourceowner" = "$USER" , project_name = "fsbu-v4aks-mm-mrm", environment = "dev" }
## Admin Access
# IP Ranges allowed to access all created cloud resources
default_public_access_cidrs = ["109.232.56.224/27", "149.173.0.0/16", "194.206.69.176/28", "98.42.140.5/32"]
create_static_kubeconfig = true
#default_public_access_cidrs = [] # e.g., ["123.45.6.89/32"]
# ************** RECOMMENDED VARIABLES ***************
# Tags can be specified matching your tagging strategy.
# tags = {} # for example: { "owner|email" = "<you>@<domain>.<com>", "key1" = "value1", "key2" = "value2" }
# Postgres config - By having this entry a database server is created. If you do not
# need an external database server remove the 'postgres_servers'
# block below.
postgres_servers = {
default = {},
}
# Azure Container Registry config
create_container_registry = false
container_registry_sku = "Standard"
container_registry_admin_enabled = false
# AKS config
kubernetes_version = "1.19.9"
default_nodepool_min_nodes = 2
default_nodepool_vm_type = "Standard_D8s_v4"
# AKS Node Pools config
node_pools = {
cas = {
"machine_type" = "Standard_E16s_v3"
"os_disk_size" = 200
"min_nodes" = 1
"max_nodes" = 1
"max_pods" = 110
"node_taints" = ["workload.sas.com/class=cas:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "cas"
}
},
compute = {
"machine_type" = "Standard_E16s_v3"
"os_disk_size" = 200
"min_nodes" = 1
"max_nodes" = 1
"max_pods" = 110
"node_taints" = ["workload.sas.com/class=compute:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "compute"
"launcher.sas.com/prepullImage" = "sas-programming-environment"
}
},
connect = {
"machine_type" = "Standard_E16s_v3"
"os_disk_size" = 200
"min_nodes" = 1
"max_nodes" = 1
"max_pods" = 110
"node_taints" = ["workload.sas.com/class=connect:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "connect"
"launcher.sas.com/prepullImage" = "sas-programming-environment"
}
},
stateless = {
"machine_type" = "Standard_D16s_v3"
"os_disk_size" = 200
"min_nodes" = 1
"max_nodes" = 2
"max_pods" = 110
"node_taints" = ["workload.sas.com/class=stateless:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "stateless"
}
},
stateful = {
"machine_type" = "Standard_D8s_v3"
"os_disk_size" = 200
"min_nodes" = 1
"max_nodes" = 3
"max_pods" = 110
"node_taints" = ["workload.sas.com/class=stateful:NoSchedule"]
"node_labels" = {
"workload.sas.com/class" = "stateful"
}
}
}
# Jump Server
create_jump_public_ip = true
jump_vm_admin = "jumpuser"
jump_vm_machine_type = "Standard_B2s"
# Storage for SAS Viya CAS/Compute
storage_type = "standard"
# required ONLY when storage_type is "standard" to create NFS Server VM
create_nfs_public_ip = false
nfs_vm_admin = "nfsuser"
nfs_vm_machine_type = "Standard_D8s_v4"
nfs_raid_disk_size = 128
nfs_raid_disk_type = "Standard_LRS"
# Azure Monitor
create_aks_azure_monitor = false
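As a quick pre-apply sanity check (a sketch, assuming the Azure CLI is installed and logged in), you can confirm that the pinned kubernetes_version above is actually offered in the chosen location:

# List the Kubernetes versions AKS currently offers in centralus (the location set above).
az aks get-versions --location centralus --output table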
Steps to Reproduce
Hello Experts!
I'm running terraform apply and hitting an issue where the required Kubernetes version, 1.19.9, is not supported by AKS in any region. See the error below and the script I made to find a region that offers it; none do.
I created a little shell script:
malepr@cldlgn04:~/public/fsbu-v4aks-mm-mrm/viya4-iac-azure$ cat ./bf_aksversion.sh
#!/bin/bash
# Query every AKS region for its supported Kubernetes versions.
for region in eastus westeurope francecentral francesouth centralus canadaeast canadacentral uksouth ukwest westcentralus westus westus2 australiaeast australiacentral australiasoutheast northeurope japaneast japanwest koreacentral koreasouth eastus2 southcentralus northcentralus southeastasia southindia centralindia eastasia southafricanorth brazilsouth brazilsoutheast australiacentral2 jioindiacentral jioindiawest swedencentral westus3 germanynorth germanywestcentral switzerlandnorth switzerlandwest uaenorth uaecentral norwayeast norwaywest
do
  echo "region $region" >> ./bf_aksversion.txt
  az aks get-versions --location "$region" --output table >> ./bf_aksversion.txt
done
See attached file for review :)
bf_aksversion.txt
│ Error: creating Managed Kubernetes Cluster "fsbu-v4aks-mm-mrm-aks" (Resource Group "fsbu-v4aks-mm-mrm-rg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="AgentPoolK8sVersionNotSupported" Message="Version 1.19.9 is not supported in this region. Please use [az aks get-versions] command to get the supported version list in this region. For more information, please check https://aka.ms/supported-version-list"
│
│ with module.aks.azurerm_kubernetes_cluster.aks,
│ on modules/azure_aks/main.tf line 27, in resource "azurerm_kubernetes_cluster" "aks":
│ 27: resource "azurerm_kubernetes_cluster" "aks" {
Expected Behavior
The Kubernetes version required by this package should be aligned with the versions supported by AKS, so this version mismatch error should not occur.
Actual Behavior
module.jump[0].azurerm_linux_virtual_machine.vm: Creation complete after 49s [id=/subscriptions/fd027923-0ba6-4fb3-8d64-623608ea2a44/resourceGroups/fsbu-v4aks-mm-mrm-rg/providers/Microsoft.Compute/virtualMachines/fsbu-v4aks-mm-mrm-jump-vm]
╷
│ Error: creating Managed Kubernetes Cluster "fsbu-v4aks-mm-mrm-aks" (Resource Group "fsbu-v4aks-mm-mrm-rg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="AgentPoolK8sVersionNotSupported" Message="Version 1.19.9 is not supported in this region. Please use [az aks get-versions] command to get the supported version list in this region. For more information, please check https://aka.ms/supported-version-list"
│
│ with module.aks.azurerm_kubernetes_cluster.aks,
│ on modules/azure_aks/main.tf line 27, in resource "azurerm_kubernetes_cluster" "aks":
│ 27: resource "azurerm_kubernetes_cluster" "aks" {
│
Additional Context
I used the location list from the error message below to populate the regions I checked:
malepr@cldlgn04:~/public/fsbu-v4aks-mm-mrm/viya4-iac-azure$ az aks get-versions --location unitedstates --output table
(NoRegisteredProviderFound) No registered resource provider found for location 'unitedstates' and API version '2019-04-01' for type 'locations/orchestrators'. The supported api-versions are '2017-09-30, 2019-04-01, 2019-06-01, 2019-08-01, 2019-10-01, 2019-11-01, 2020-01-01, 2020-02-01, 2020-03-01, 2020-04-01, 2020-06-01, 2020-07-01, 2020-09-01, 2020-11-01, 2020-12-01, 2021-02-01, 2021-03-01, 2021-05-01, 2021-07-01, 2021-08-01'.
The supported locations are 'eastus, westeurope, francecentral, francesouth, centralus, canadaeast, canadacentral, uksouth, ukwest, westcentralus, westus, westus2, australiaeast, australiacentral, australiasoutheast, northeurope, japaneast, japanwest, koreacentral, koreasouth, eastus2, southcentralus, northcentralus, southeastasia, southindia, centralindia, eastasia, southafricanorth, brazilsouth, brazilsoutheast, australiacentral2, jioindiacentral, jioindiawest, swedencentral, westus3, germanynorth, germanywestcentral, switzerlandnorth, switzerlandwest, uaenorth, uaecentral, norwayeast, norwaywest'.
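For what it's worth, the full set of region names can also be listed directly (a sketch, assuming the Azure CLI is logged in) rather than harvested from an error message:

# Print all Azure location names, one per line.
az account list-locations --query "[].name" --output tsv | sort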
References
No response
Code of Conduct
- [x] I agree to follow this project's Code of Conduct
Hi @mleprince018, you can run this command:
az aks get-versions --location eastus --output table
KubernetesVersion Upgrades
------------------- -----------------------
1.21.2 None available
1.21.1 1.21.2
1.20.9 1.21.1, 1.21.2
1.20.7 1.20.9, 1.21.1, 1.21.2
1.19.13 1.20.7, 1.20.9
1.19.11 1.19.13, 1.20.7, 1.20.9
You can change eastus to any viable location to get the current list of Kubernetes versions.
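If you only want the bare version strings for scripting, a --query sketch like the following should work (assuming the 2021-era response schema, where versions are listed under orchestrators):

# Print just the supported Kubernetes version numbers for a region.
az aks get-versions --location eastus \
  --query "orchestrators[].orchestratorVersion" --output tsv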
To work around the problem you can adjust the following variable in your tfvars file:
# AKS config
kubernetes_version = "1.19.13"
Hi Thomas,
Thank you - I've updated the terraform.tfvars file, rebuilt the terraform plan, and applied it.
Unfortunately I can't consider this fully resolved yet, as AKS seems to have run into an error. I'll update you later if it completes successfully.
module.aks.azurerm_kubernetes_cluster.aks: Still creating... [6m1s elapsed]
╷
│ Error: waiting for creation of Managed Kubernetes Cluster "fsbu-v4aks-mm-mrm-aks" (Resource Group "fsbu-v4aks-mm-mrm-rg"): Code="ProvisioningControlPlaneError" Message="AKS encountered an internal error while attempting the requested Creating operation. AKS will continuously retry the requested operation until successful or a retry timeout is hit. Check back to see if the operation requires resubmission. Correlation ID: 1f650d94-7a26-12a3-f1c9-5bbbb21b4932, Operation ID: 7795b569-583b-4981-bc04-9a75347c574f, Timestamp: 2021-09-14T20:53:12Z."
│
│ with module.aks.azurerm_kubernetes_cluster.aks,
│ on modules/azure_aks/main.tf line 27, in resource "azurerm_kubernetes_cluster" "aks":
│ 27: resource "azurerm_kubernetes_cluster" "aks" {
│
Yup, that's an AKS issue. You'd have to check the portal and see if there are any issues there.
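For reference, the cluster's provisioning state can also be checked from the CLI while AKS retries (a sketch using the resource names from this deployment):

# Show whether the cluster is still Creating, Failed, or Succeeded.
az aks show --resource-group fsbu-v4aks-mm-mrm-rg --name fsbu-v4aks-mm-mrm-aks \
  --query provisioningState --output tsv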
Just wanted to close this out: modifying the parameter below, then rebuilding the Terraform plan and applying it, fixed the issue.
kubernetes_version = "1.19.13"
Adding this one back in as the default value does need to be changed.
Marking as closed, as kubernetes_version is now set to 1.23.8.