terraform-provider-google
Value out of range when trying to run plan
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
- Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
- If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.
Terraform Version
0.12.25
Affected Resource(s)
google_bigquery_dataset
Terraform Configuration Files
provider "google" {
version = "3.22.0"
project = var.gke_project
region = var.gke_region
}
provider "google-beta" {
version = "3.22.0"
project = var.gke_project
region = var.gke_region
}
resource "google_compute_subnetwork" "subnetwork-ip-alias" {
name = "${var.cluster_name}-subnet"
region = var.gke_region
network = var.vpc_self_link
ip_cidr_range = var.ipv4_main_range
secondary_ip_range {
range_name = var.ipv4_pods_range_name
ip_cidr_range = var.ipv4_pods_range
}
secondary_ip_range {
range_name = var.ipv4_services_range_name
ip_cidr_range = var.ipv4_services_range
}
}
resource "google_bigquery_dataset" "dataset" {
dataset_id = replace("gke_usage_${var.cluster_name}", "-", "_")
friendly_name = "gke-usage-${var.cluster_name}"
description = "GKE usage - ${var.cluster_name}"
location = "EU"
project = var.gke_project
labels = {
env = var.env
cluster = var.cluster_name
}
access {
role = "OWNER"
special_group = "projectOwners"
}
access {
role = "READER"
special_group = "projectReaders"
}
access {
role = "WRITER"
special_group = "projectWriters"
}
}
resource "google_container_cluster" "gke_cluster" {
name = var.cluster_name
description = var.cluster_description
location = var.gke_region
min_master_version = var.gke_version
node_version = var.gke_version
enable_kubernetes_alpha = "false"
provider = google-beta
# Remove default node pool
remove_default_node_pool = true
initial_node_count = 1
# Network to which the cluster is connected
network = var.vpc_self_link
subnetwork = google_compute_subnetwork.subnetwork-ip-alias.name
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
cluster_autoscaling {
enabled = true
auto_provisioning_defaults {
oauth_scopes = var.node_autoprovisioning_oath_scopes
}
resource_limits {
resource_type = "cpu"
minimum = var.node_autoprovisioning_settings.min_cpu
maximum = var.node_autoprovisioning_settings.max_cpu
}
resource_limits {
resource_type = "memory"
minimum = var.node_autoprovisioning_settings.min_mem
maximum = var.node_autoprovisioning_settings.max_mem
}
}
maintenance_policy {
recurring_window {
start_time = var.default_maintenance_policy_recurring_window.start_time
end_time = var.default_maintenance_policy_recurring_window.end_time
recurrence = var.default_maintenance_policy_recurring_window.recurrence
}
}
ip_allocation_policy {
cluster_secondary_range_name = var.ipv4_pods_range_name
services_secondary_range_name = var.ipv4_services_range_name
}
resource_labels = {
team = "mls"
type = "compute"
environment = var.env
}
vertical_pod_autoscaling {
enabled = false
}
addons_config {
dns_cache_config {
enabled = true
}
}
resource_usage_export_config {
enable_network_egress_metering = false
enable_resource_consumption_metering = true
bigquery_destination {
dataset_id = google_bigquery_dataset.dataset.dataset_id
}
}
}
Debug Output
https://gist.github.com/thecodeassassin/f9e0c436100cefb2028eee96ab9faf18
Panic Output
Crash log doesn't exist.
Expected Behavior
Plan should be created successfully.
Actual Behavior
Terraform exits, and we found this error:
module..google_bigquery_dataset.dataset: Refreshing state... [id=projects/***/datasets/gke_usage_europe_west] 2020-06-03T15:46:30.339Z [DEBUG] plugin.terraform-provider-google_v3.24.0_x5: panic: Error reading level state: strconv.ParseInt: parsing "1591104108820": value out of range
resulting in:
Error: rpc error: code = Canceled desc = context canceled
Steps to Reproduce
- terraform plan
Important Factoids
Things were working fine before. This suddenly stopped working; none of our code ever changed, the entire pipeline just died.
References
- #0000
Hi @thecodeassassin! I'm sorry this stopped working. It looks to be an issue with the SDK, I think, so I have opened up an issue with them. https://github.com/hashicorp/terraform-plugin-sdk/issues/469
Hi @thecodeassassin, I wanted to check to see if this is still an issue? Thanks!
Hello, I have the same issue.
│ Error: Plugin did not respond
│
│ with module.main.google_bigquery_dataset.ds_tmp,
│ on ..\bigquery.tf line 95, in resource "google_bigquery_dataset" "ds_tmp":
│ 95: resource "google_bigquery_dataset" "ds_tmp" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).PlanResourceChange call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-google_v4.25.0_x5.exe plugin:
panic: Error reading level state: strconv.ParseInt: parsing "1655974603933": value out of range
It only happens on Windows.
terraform.exe -chdir=iac/main/rec state show module.main.google_bigquery_dataset.ds_tmp
# module.main.google_bigquery_dataset.ds_tmp:
resource "google_bigquery_dataset" "ds_tmp" {
creation_time = 1655974603933
dataset_id = "DS_TMP"
I think the relevant lines of code are https://github.com/hashicorp/terraform-provider-google/blob/3fafbdf0fb11b09636c2864f953a71ab07492bff/google/resource_bigquery_dataset.go#L831 and https://github.com/hashicorp/terraform-provider-google/blob/3fafbdf0fb11b09636c2864f953a71ab07492bff/google/utils.go#L315. I wonder if strconv.ParseInt(v, 10, 64) is troublesome when the provider is compiled for 32-bit Windows. (I have a 64-bit system.)
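A minimal Go sketch of the suspected overflow, assuming the parsed value ultimately has to fit a native int (32 bits wide on an i386 build); the bitSize 32 call below stands in for that assumption, since ParseInt with bitSize 64 has plenty of headroom for a millisecond timestamp:

package main

import (
    "fmt"
    "strconv"
)

func main() {
    v := "1591104108820" // a BigQuery creation_time, milliseconds since epoch

    // bitSize 64: the value fits easily (max int64 is roughly 9.2e18).
    i64, err := strconv.ParseInt(v, 10, 64)
    fmt.Println(i64, err) // 1591104108820 <nil>

    // bitSize 32, standing in for a native int on an i386 build:
    // max int32 is 2147483647, so parsing fails with the same
    // "value out of range" error seen in the panic above.
    i32, err := strconv.ParseInt(v, 10, 32)
    fmt.Println(i32, err)
}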
@edwardmedia @megan07 do we still believe this is upstream-terraform?
I created https://github.com/hashicorp/terraform-plugin-sdk/issues/1236 because the original SDK issue was closed
This looks like a blocker; is there any workaround? I have 64-bit Windows and I just can't work with Terraform.
I upgraded the GCP provider and Terraform with no success. There's no option to choose how to run Terraform (64-bit vs 32-bit), so why is it defaulting to 32-bit?
The problematic entry in state is "module": "module.gcp_bigquery", "mode": "managed", "type": "google_bigquery_dataset", where we have "creation_time": 1682188988890. Actually, there are many more such ints, also in google_bigquery_table, and in google_compute_managed_ssl_certificate it's certificate_id.
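If you want to enumerate these values yourself, here is a sketch using terraform state pull piped into jq (the paths assume the v4 state format; adjust the type filter for other resources):

terraform state pull | jq '.resources[] | select(.type == "google_bigquery_dataset") | .instances[].attributes.creation_time'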
You should verify that your Terraform binary was not unintentionally installed as 32-bit - that's the most common scenario we see cause this issue. Notably, i386 appears first on https://developer.hashicorp.com/terraform/install when effectively all users outside of niche scenarios should be using AMD64.
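To check which build you have, terraform version prints the platform on recent releases; the exact output below is an assumed example:

terraform version
Terraform v1.5.7
on windows_386

windows_386 is the 32-bit build; reinstall the windows_amd64 one.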
@rileykarson you were right ;) I wonder why this happens, though. Maybe people (me included) associate AMD64 with AMD CPUs and the i in i386 with Intel, and don't look past that - I know that's not what it means, but sometimes I take mental shortcuts ;p Or it's that i386 is first on the left when, as you said, it should be niche. Some UX tweaks there could prevent that mistake in the future ;)
Anyway, thanks, helped me! :)
Ah, this is the canonical issue, reopening. I'm going to edit the parent here to cover the general case; the original parent comment is preserved verbatim above as the issue body.
b/304968076