terraform-google-kubernetes-engine
[beta-private-cluster-update-variant] provider "registry.terraform.io/hashicorp/google-beta" produced an invalid new value for .node_pool_defaults: block count changed from 0 to 1
### TL;DR

When running `atlantis apply`, the plan shows an in-place update to the cluster:
```
  # module.platform_eng_environments.module.plato_ziniz_instance.module.gke.google_container_cluster.primary will be updated in-place
  ~ resource "google_container_cluster" "primary" {
        id   = "projects/plato-ziniz-103365/locations/us-central1/clusters/ziniz-plato"
        name = "ziniz-plato"
        # (27 unchanged attributes hidden)

      - node_pool_defaults {
        }

        # (24 unchanged blocks hidden)
    }
```
And the `atlantis apply` failed with:
```
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for
│ module.platform_eng_environments.module.plato_ziniz_instance.module.gke.google_container_cluster.primary
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/google-beta" produced an invalid new value
│ for .node_pool_defaults: block count changed from 0 to 1.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
```
### Expected behavior

The apply should succeed.

### Observed behavior

The apply failed with the error above.
### Terraform Configuration

```hcl
# Core GKE regional cluster
module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version = "21.0.0"

  project_id         = var.project_id
  network_project_id = var.admin_project_id
  name               = local.resource_name
  region             = var.region

  cluster_resource_labels = { "mesh_id" : "proj-${data.google_project.project.number}" }

  network            = var.admin_network_name
  subnetwork         = local.gke_subnet
  ip_range_pods      = local.pod_subnet_range
  ip_range_services  = local.svc_subnet_range

  monitoring_service = "monitoring.googleapis.com/kubernetes"
  logging_service    = "logging.googleapis.com/kubernetes"

  release_channel            = "STABLE"
  network_policy             = true
  horizontal_pod_autoscaling = true
  remove_default_node_pool   = true
  enable_shielded_nodes      = true

  enable_private_nodes         = true
  enable_private_endpoint      = false
  master_ipv4_cidr_block       = var.apiserver_cidr
  master_global_access_enabled = true

  master_authorized_networks = [
    {
      cidr_block   = "0.0.0.0/0",
      display_name = "everything (TODO)"
    },
  ]

  create_service_account = false

  node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "e2-standard-4"
      min_count          = 1
      max_count          = 4
      disk_size_gb       = 100
      disk_type          = "pd-standard"
      image_type         = "COS_CONTAINERD"
      auto_upgrade       = true
      service_account    = module.gke_node_service_account.email
      enable_secure_boot = true
      preemptible        = false
    },
  ]

  node_pools_oauth_scopes = {
    all = []
    default-node-pool = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }

  depends_on = [
    module.vpc_subnet,
  ]
}
```
### Terraform Version

```hcl
terraform {
  backend "gcs" {
    bucket = ...
  }

  # Current Verily terraform version can be found at http://go/verily-terraform.
  required_version = "~> 1.1.2"

  required_providers {
    google = {
      version = "~> 4.6"
      source  = "hashicorp/google"
    }
    google-beta = {
      version = "~> 4.6"
      source  = "hashicorp/google-beta"
    }
  }
}
```
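Note that `~> 4.6` is a floating constraint: it allows any google-beta 4.x release from 4.6 upward, so the version actually in use is whichever one `terraform init` most recently selected. If the regression turns out to be tied to a specific recent release, one possible stop-gap is to pin the provider to an exact version until the upstream fix ships. This is only a sketch; the version number below is an illustrative placeholder, not a confirmed unaffected release.

```hcl
terraform {
  required_providers {
    google-beta = {
      source = "hashicorp/google-beta"
      # Exact pin instead of the floating "~> 4.6" range.
      # 4.36.0 is an illustrative placeholder only, not a known-good release.
      version = "= 4.36.0"
    }
  }
}
```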
### Additional information
_No response_
This seems related to provider issues https://github.com/hashicorp/terraform-provider-google/issues/12422 and https://github.com/hashicorp/terraform-provider-google/issues/12549.
Thanks for the report @zinizhu and for the further info @williamsmt. This looks like a possible upstream provider issue, with a fix opened at https://github.com/GoogleCloudPlatform/magic-modules/pull/6546. Could you confirm whether you are using the affected provider version?
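For reference, the provider version Terraform actually selected is recorded in `.terraform.lock.hcl` after `terraform init`, which is the easiest place to confirm whether the affected google-beta release is in use. A sketch of what that entry looks like (the version shown is illustrative, not the reporter's actual version):

```hcl
# Excerpt from .terraform.lock.hcl (generated by `terraform init`).
# The version below is illustrative only.
provider "registry.terraform.io/hashicorp/google-beta" {
  version     = "4.37.0"
  constraints = "~> 4.6"
  hashes = [
    # ... provider package checksums ...
  ]
}
```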