
Value out of range when trying to run plan

Open thecodeassassin opened this issue 4 years ago • 2 comments

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

0.12.25

Affected Resource(s)

google_bigquery_dataset

Terraform Configuration Files

provider "google" {
  version = "3.22.0"
  project = var.gke_project
  region  = var.gke_region
}

provider "google-beta" {
  version = "3.22.0"
  project = var.gke_project
  region  = var.gke_region
}

resource "google_compute_subnetwork" "subnetwork-ip-alias" {
  name          = "${var.cluster_name}-subnet"
  region        = var.gke_region
  network       = var.vpc_self_link
  ip_cidr_range = var.ipv4_main_range

  secondary_ip_range {
    range_name    = var.ipv4_pods_range_name
    ip_cidr_range = var.ipv4_pods_range
  }

  secondary_ip_range {
    range_name    = var.ipv4_services_range_name
    ip_cidr_range = var.ipv4_services_range
  }
}

resource "google_bigquery_dataset" "dataset" {
  dataset_id    = replace("gke_usage_${var.cluster_name}", "-", "_")
  friendly_name = "gke-usage-${var.cluster_name}"
  description   = "GKE usage - ${var.cluster_name}"
  location      = "EU"
  project       = var.gke_project

  labels = {
    env     = var.env
    cluster = var.cluster_name
  }

  access {
    role          = "OWNER"
    special_group = "projectOwners"
  }
  access {
    role          = "READER"
    special_group = "projectReaders"
  }
  access {
    role          = "WRITER"
    special_group = "projectWriters"
  }
}

resource "google_container_cluster" "gke_cluster" {
  name                    = var.cluster_name
  description             = var.cluster_description
  location                = var.gke_region
  min_master_version      = var.gke_version
  node_version            = var.gke_version
  enable_kubernetes_alpha = "false"
  provider                = google-beta

  # Remove default node pool
  remove_default_node_pool = true
  initial_node_count       = 1

  # Network to which the cluster is connected
  network    = var.vpc_self_link
  subnetwork = google_compute_subnetwork.subnetwork-ip-alias.name

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }

  cluster_autoscaling {
    enabled = true

    auto_provisioning_defaults {
      oauth_scopes = var.node_autoprovisioning_oath_scopes
    }

    resource_limits {
      resource_type = "cpu"
      minimum       = var.node_autoprovisioning_settings.min_cpu
      maximum       = var.node_autoprovisioning_settings.max_cpu
    }

    resource_limits {
      resource_type = "memory"
      minimum       = var.node_autoprovisioning_settings.min_mem
      maximum       = var.node_autoprovisioning_settings.max_mem
    }
  }


  maintenance_policy {
    recurring_window {
      start_time = var.default_maintenance_policy_recurring_window.start_time
      end_time   = var.default_maintenance_policy_recurring_window.end_time
      recurrence = var.default_maintenance_policy_recurring_window.recurrence
    }
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = var.ipv4_pods_range_name
    services_secondary_range_name = var.ipv4_services_range_name
  }

  resource_labels = {
    team        = "mls"
    type        = "compute"
    environment = var.env
  }

  vertical_pod_autoscaling {
    enabled = false
  }

  addons_config {
    dns_cache_config {
      enabled = true
    }
  }

  resource_usage_export_config {
    enable_network_egress_metering       = false
    enable_resource_consumption_metering = true

    bigquery_destination {
      dataset_id = google_bigquery_dataset.dataset.dataset_id
    }
  }
}

Debug Output

https://gist.github.com/thecodeassassin/f9e0c436100cefb2028eee96ab9faf18

Panic Output

Crash log doesn't exist.

Expected Behavior

Plan should be created successfully.

Actual Behavior

Terraform exits with this error:

module..google_bigquery_dataset.dataset: Refreshing state... [id=projects/***/datasets/gke_usage_europe_west] 2020-06-03T15:46:30.339Z [DEBUG] plugin.terraform-provider-google_v3.24.0_x5: panic: Error reading level state: strconv.ParseInt: parsing "1591104108820": value out of range

resulting in:

Error: rpc error: code = Canceled desc = context canceled

Steps to Reproduce

  1. terraform plan

Important Factoids

Things were working fine before. This suddenly stopped working even though none of our code changed; the entire pipeline just died.

References

  • #0000

thecodeassassin avatar Jun 03 '20 15:06 thecodeassassin

Hi @thecodeassassin! I'm sorry this stopped working. It looks to be an issue with the SDK, I think, so I have opened up an issue with them. https://github.com/hashicorp/terraform-plugin-sdk/issues/469

megan07 avatar Jun 04 '20 15:06 megan07

Hi @thecodeassassin, I wanted to check to see if this is still an issue? Thanks!

megan07 avatar Mar 16 '22 19:03 megan07

Hello, I have the same issue.

│ Error: Plugin did not respond
│
│   with module.main.google_bigquery_dataset.ds_tmp,
│   on ..\bigquery.tf line 95, in resource "google_bigquery_dataset" "ds_tmp":
│   95: resource "google_bigquery_dataset" "ds_tmp" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).PlanResourceChange call. The plugin logs may contain more details.
╵

Stack trace from the terraform-provider-google_v4.25.0_x5.exe plugin:

panic: Error reading level state: strconv.ParseInt: parsing "1655974603933": value out of range

It only happens on Windows.

terraform.exe -chdir=iac/main/rec state show module.main.google_bigquery_dataset.ds_tmp
# module.main.google_bigquery_dataset.ds_tmp:
resource "google_bigquery_dataset" "ds_tmp" {
    creation_time                   = 1655974603933
    dataset_id                      = "DS_TMP"

I think the relevant lines of code are https://github.com/hashicorp/terraform-provider-google/blob/3fafbdf0fb11b09636c2864f953a71ab07492bff/google/resource_bigquery_dataset.go#L831 and https://github.com/hashicorp/terraform-provider-google/blob/3fafbdf0fb11b09636c2864f953a71ab07492bff/google/utils.go#L315

I wonder if strconv.ParseInt(v, 10, 64) is troublesome when the provider is compiled for 32-bit Windows. (I have a 64-bit system.)
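
For reference, here is a minimal standalone Go sketch (not the provider's actual code path) of why this only bites 32-bit builds: parsing the same millisecond timestamp with an explicit 64-bit size succeeds everywhere, while parsing into a platform-sized int overflows wherever int is only 32 bits wide - the situation on a 386 build.

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// creation_time as returned by the BigQuery API (milliseconds since the
	// Unix epoch), taken from the panic message above.
	s := "1655974603933"

	// An explicit 64-bit parse succeeds on any platform, including GOARCH=386.
	v64, err := strconv.ParseInt(s, 10, 64)
	fmt.Println(v64, err) // 1655974603933 <nil>

	// A platform-sized parse (bitSize 0) overflows wherever int is 32 bits
	// wide, returning the same "value out of range" error seen in the panic.
	v, err := strconv.ParseInt(s, 10, 0)
	fmt.Println(v, err)
}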

radureau avatar Nov 05 '22 12:11 radureau

@edwardmedia @megan07 do we still believe this is upstream-terraform?

melinath avatar May 26 '23 22:05 melinath

I created https://github.com/hashicorp/terraform-plugin-sdk/issues/1236 because the original SDK issue was closed

slevenick avatar Aug 28 '23 23:08 slevenick

This looks like a blocker; is there any workaround? I have 64-bit Windows and I just can't work with Terraform.

I upgraded the GCP provider and Terraform with no success. There's no option to choose how Terraform runs (64-bit vs 32-bit), so why is it running as 32-bit by default?

The problematic entry is: "module": "module.gcp_bigquery", "mode": "managed", "type": "google_bigquery_dataset", which contains "creation_time": 1682188988890.

Actually, there are many more such ints, also in google_bigquery_table and in google_compute_managed_ssl_certificate (certificate_id).

Boardtale avatar Jan 02 '24 13:01 Boardtale

You should verify that your Terraform binary was not unintentionally installed as 32-bit - that's the most common scenario we see causing this issue. Notably, i386 appears first on https://developer.hashicorp.com/terraform/install, when effectively all users outside of niche scenarios should be using AMD64.
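
If in doubt, one quick way to check an existing terraform.exe is to read its PE header. This is an illustrative Go sketch (the usage and filename are just examples, not an official tool):

package main

import (
	"debug/pe"
	"fmt"
	"log"
	"os"
)

func main() {
	// Usage: go run checkarch.go C:\path\to\terraform.exe
	if len(os.Args) < 2 {
		log.Fatal("usage: checkarch <path-to-terraform.exe>")
	}

	f, err := pe.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	switch f.FileHeader.Machine {
	case pe.IMAGE_FILE_MACHINE_AMD64:
		fmt.Println("64-bit (amd64) build")
	case pe.IMAGE_FILE_MACHINE_I386:
		fmt.Println("32-bit (386) build - replace it with the AMD64 download")
	default:
		fmt.Printf("other machine type: %#x\n", f.FileHeader.Machine)
	}
}

Recent Terraform releases also print the platform (for example windows_amd64 or windows_386) in the terraform version output, which is an even quicker check.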

rileykarson avatar Jan 02 '24 14:01 rileykarson

@rileykarson you were right ;) I wonder why this happens, though. Maybe people (me included) associate AMD64 with AMD CPUs and i386 with Intel and don't connect it to the actual architecture - I know it's not that, but sometimes I take mental shortcuts ;p Or it's because i386 is the first option on the left when, as you said, it should be the niche one. A few UX tweaks could prevent that mistake in the future ;)

Anyway, thanks, helped me! :)

Boardtale avatar Jan 13 '24 10:01 Boardtale

Ah, this is the canonical issue, reopening

slevenick avatar Jan 16 '24 14:01 slevenick

I'm gonna edit the parent here to cover the general case.

b/304968076

rileykarson avatar Mar 05 '24 18:03 rileykarson