terraform-google-kubernetes-engine
[beta-private-cluster-update-variant] produced an invalid new value for .node_locations
TL;DR
When we deploy the same beta-private-cluster-update-variant module a second time (with changes to other resources), it fails with the error "[w]hen expanding the plan for module.foo_cluster.google_container_cluster.primary to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/google-beta" produced an invalid new value for .node_locations: planned set element cty.StringVal("us-central1-a") does not correlate with any element in actual."
The issue goes away if we provide explicit values for both `zones` and `node_pools.node_locations`. However, we do not want to hard-code those values.
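For illustration, a minimal sketch of the workaround described above. The zone values are placeholders, not our real ones, and this assumes the module's `node_pools` objects accept `node_locations` as a comma-separated string:

```hcl
module "example_cluster" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version = "~> 21.2.0"

  # ... other required arguments as in the full configuration below ...

  # Hard-coding the zones avoids the "invalid new value for .node_locations"
  # plan inconsistency, at the cost of losing automatic zone selection.
  zones = ["us-central1-a", "us-central1-b", "us-central1-c"]

  node_pools = [
    {
      name           = "default-node-pool"
      node_locations = "us-central1-a,us-central1-b,us-central1-c"
    },
  ]
}
```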
Expected behavior
Since there is no actual change to the module's inputs, we expect the apply to succeed and the cluster to remain unchanged.
Observed behavior
The following error is observed in the tf-apply log:

```
Step #2 - "Apply": Error: Provider produced inconsistent final plan
Step #2 - "Apply":
Step #2 - "Apply": When expanding the plan for
Step #2 - "Apply": module.simhos_cluster.google_container_cluster.primary to include new values
Step #2 - "Apply": learned so far during apply, provider
Step #2 - "Apply": "registry.terraform.io/hashicorp/google-beta" produced an invalid new value
Step #2 - "Apply": for .node_locations: planned set element cty.StringVal("us-central1-a") does
Step #2 - "Apply": not correlate with any element in actual.
Step #2 - "Apply":
Step #2 - "Apply": This is a bug in the provider, which should be reported in the provider's own
Step #2 - "Apply": issue tracker.
Step #2 - "Apply": Finished Step #2 - "Apply"
```
Terraform Configuration
```hcl
module "{{.module_name}}" {
  source     = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version    = "~> 21.2.0"
  depends_on = [module.project]

  # Required.
  name               = "{{.cluster_name}}"
  project_id         = "{{.project_id}}"
  region             = "{{.region}}"
  regional           = true
  network_project_id = "{{.network_project_id}}"
  network            = "{{.network}}"
  subnetwork         = "{{.subnet}}"
  ip_range_pods      = "pods-range"
  ip_range_services  = "services-range"

  add_cluster_firewall_rules = true
  master_ipv4_cidr_block     = "{{.master_ipv4_cidr_block}}"
  istio                      = false
  skip_provisioners          = true
  enable_private_endpoint    = true
  release_channel            = "STABLE"
  network_policy             = true

  # Remove the default node pool, as it cannot be modified without destroying the cluster.
  remove_default_node_pool = true

  issue_client_certificate      = false
  deploy_using_private_endpoint = true

  # Private nodes give better control over public exposure and reduce the
  # ability of nodes to reach the Internet without additional configuration.
  enable_private_nodes = true

  # Allow the cluster master to be accessible globally (from any region).
  master_global_access_enabled = true

  # master_authorized_networks can be specified to restrict access to the public endpoint.
  # Also see https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters.
  enable_binary_authorization = true

  # Workload Identity is enabled by default in beta-private-cluster-update-variant:
  # identity_namespace is set to [project_id].svc.id.goog and node_metadata to GKE_METADATA_SERVER.
  master_authorized_networks = [
    {
      display_name = "cloudbuild"
      cidr_block   = "{{.cloud_build_pool_range}}"
    }
  ]

  node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "e2-medium"
      min_count          = 1
      max_count          = 20
      local_ssd_count    = 0
      disk_size_gb       = 100
      disk_type          = "pd-standard"
      image_type         = "COS_CONTAINERD"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = "{{.service_account}}"
      preemptible        = false
      initial_node_count = 1
      enable_secure_boot = true
    },
  ]
}
```
Terraform Version
0.14.9
Additional information
No response
Hi @yuatgoogle, what version of the provider are you using? Also, could you try a newer Terraform version to see if it's a core bug they have since fixed?
We are on hashicorp/google-beta v4.18.0. I looked at the release notes of this repo, and I don't see anything regarding node_pool or private-cluster-update-variant since version 21.2.0.
I will try upgrading Terraform, although we are using some deprecated features, so it may or may not be viable.
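For anyone trying to reproduce this, a hedged sketch of pinning the core and provider versions involved. The version numbers are simply the ones reported in this thread, not recommendations:

```hcl
# Illustrative pins matching the environment reported above.
terraform {
  required_version = ">= 1.2.7"

  required_providers {
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "4.18.0"
    }
  }
}
```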
@bharathkkb We were able to deploy with Terraform 1.2.7 (latest release), and we still run into the same issue.
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days
Bumping to avoid closure, I guess?
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days
Sneaky bot not taking a Christmas break.