terraform-provider-kubernetes
kubernetes_horizontal_pod_autoscaler_v2 produces a wrong manifest with pod metrics
I want to import my existing autoscaler configuration into Terraform, so I started working with kubernetes_horizontal_pod_autoscaler_v2 to replicate a manifest I built some time ago without Terraform.
However, I'm not able to create this manifest with it (the manifest is valid, already applied to my Kubernetes cluster, and working as intended):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sidekiq
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Pods
      pods:
        metric:
          name: sidekiq_queue_latency # How long a job has been waiting in the queue
        target:
          type: Value
          averageValue: "20" # Keep it under 20 seconds
    - type: Pods
      pods:
        metric:
          name: sidekiq_jobs_waiting_count # How many jobs are waiting to be processed
        target:
          type: Value
          averageValue: "10" # Keep it under 10 jobs
Terraform Version, Provider Version and Kubernetes Version
Terraform v1.9.6
on darwin_arm64
+ provider registry.terraform.io/hashicorp/kubernetes v2.35.0
Affected Resource(s)
- kubernetes_horizontal_pod_autoscaler_v2
Terraform Configuration Files
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.35.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "kind-kind"
}

resource "kubernetes_namespace_v1" "test" {
  metadata {
    name = "test"
  }
}

resource "kubernetes_deployment_v1" "deployment" {
  metadata {
    name      = "test"
    namespace = kubernetes_namespace_v1.test.metadata[0].name
    labels    = { app = "test" }
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "test" }
    }

    template {
      metadata {
        labels = { app = "test" }
      }

      spec {
        container {
          image = "nginx"
          name  = "nginx"
        }
      }
    }
  }
}
resource "kubernetes_horizontal_pod_autoscaler_v2" "hpa" {
metadata {
name = "test-hpa"
namespace = kubernetes_namespace_v1.test.metadata[0].name
}
spec {
scale_target_ref {
api_version = "apps/v1"
kind = "Deployment"
name = kubernetes_deployment_v1.deployment.metadata[0].name
}
# If not given defaults to 1
min_replicas = 1
max_replicas = 3
metric {
type = "Pods"
pods {
metric {
name = "puma_backlog"
}
target {
type = "Value"
average_value = "1"
}
}
}
}
}
import {
  id = "test/sidekiq"
  to = kubernetes_horizontal_pod_autoscaler_v2.sidekiq
}

resource "kubernetes_horizontal_pod_autoscaler_v2" "sidekiq" {
  metadata {
    name      = "sidekiq"
    namespace = kubernetes_namespace_v1.test.metadata[0].name
  }

  spec {
    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = kubernetes_deployment_v1.deployment.metadata[0].name
    }

    # If not given defaults to 1
    min_replicas = 1
    max_replicas = 3

    metric {
      type = "Pods"
      pods {
        metric {
          name = "sidekiq_queue_latency"
        }
        target {
          type          = "Value"
          average_value = "20"
        }
      }
    }

    metric {
      type = "Pods"
      pods {
        metric {
          name = "sidekiq_jobs_waiting_count"
        }
        target {
          type          = "Value"
          average_value = "10"
        }
      }
    }
  }
}
Steps to Reproduce
terraform apply -target kubernetes_horizontal_pod_autoscaler_v2.hpa
Expected Behavior
It should have created a namespace, a deployment, and an HPA configuration similar to this:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: test-hpa
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Pods
      pods:
        metric:
          name: puma_backlog
        target:
          type: Value
          averageValue: "1"
Actual Behavior
The namespace and deployment are created, but the HPA creation fails with the error below. It seems that averageValue is not passed to the API server.
╷
│ Error: HorizontalPodAutoscaler.autoscaling "test-hpa" is invalid: spec.metrics[0].pods.target.averageValue: Required value: must specify a positive target averageValue
│
│ with kubernetes_horizontal_pod_autoscaler_v2.hpa,
│ on hpa.tf line 46, in resource "kubernetes_horizontal_pod_autoscaler_v2" "hpa":
│ 46: resource "kubernetes_horizontal_pod_autoscaler_v2" "hpa" {
│
╵
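Until this is fixed, a possible workaround (my own sketch, not part of the original report) is to manage the HPA with the kubernetes_manifest resource, which passes the object to the API server as-is instead of going through the typed HPA schema, so averageValue survives. The resource name hpa_workaround is made up for illustration:

# Hedged sketch: the same HPA as kubernetes_horizontal_pod_autoscaler_v2.hpa above,
# expressed as a raw manifest so averageValue reaches the API unchanged.
resource "kubernetes_manifest" "hpa_workaround" {
  manifest = {
    apiVersion = "autoscaling/v2"
    kind       = "HorizontalPodAutoscaler"
    metadata = {
      name      = "test-hpa"
      namespace = kubernetes_namespace_v1.test.metadata[0].name
    }
    spec = {
      scaleTargetRef = {
        apiVersion = "apps/v1"
        kind       = "Deployment"
        name       = kubernetes_deployment_v1.deployment.metadata[0].name
      }
      minReplicas = 1
      maxReplicas = 3
      metrics = [
        {
          type = "Pods"
          pods = {
            metric = { name = "puma_backlog" }
            target = {
              type         = "Value"
              averageValue = "1" # the field the typed resource currently drops
            }
          }
        }
      ]
    }
  }
}

Note that kubernetes_manifest needs access to the cluster at plan time, so this is only a stopgap until the typed resource handles average_value correctly.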
Error on import
If I try to import my existing manifest (the one at the beginning of this issue), something strange happens:
terraform apply -target kubernetes_deployment_v1.deployment -auto-approve
kubectl apply -f hpa.yaml # Namespace must exist
terraform plan -target kubernetes_horizontal_pod_autoscaler_v2.sidekiq # to see the diff
I expect this plan to show no diff; however, it shows two strange diffs:
kubernetes_namespace_v1.test: Refreshing state... [id=test]
kubernetes_deployment_v1.deployment: Refreshing state... [id=test/test]
kubernetes_horizontal_pod_autoscaler_v2.sidekiq: Preparing import... [id=test/sidekiq]
kubernetes_horizontal_pod_autoscaler_v2.sidekiq: Refreshing state... [id=test/sidekiq]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
  # kubernetes_horizontal_pod_autoscaler_v2.sidekiq will be updated in-place
  # (imported from "test/sidekiq")
  ~ resource "kubernetes_horizontal_pod_autoscaler_v2" "sidekiq" {
        id = "test/sidekiq"

        metadata {
            annotations      = {}
            generate_name    = null
            generation       = 0
            labels           = {}
            name             = "sidekiq"
            namespace        = "test"
            resource_version = "597615"
            uid              = "4b4de789-3ba9-4056-9bd2-0531fb69ffcf"
        }

      ~ spec {
            max_replicas                      = 3
            min_replicas                      = 1
            target_cpu_utilization_percentage = 0

          ~ metric {
                type = "Pods"

              ~ pods {
                    metric {
                        name = "sidekiq_queue_latency"
                    }

                  ~ target {
                        average_utilization = 0
                      + average_value       = "20"
                        type                = "Value"
                      - value               = "<nil>" -> null
                    }
                }
            }
          ~ metric {
                type = "Pods"

              ~ pods {
                    metric {
                        name = "sidekiq_jobs_waiting_count"
                    }

                  ~ target {
                        average_utilization = 0
                      + average_value       = "10"
                        type                = "Value"
                      - value               = "<nil>" -> null
                    }
                }
            }

            scale_target_ref {
                api_version = "apps/v1"
                kind        = "Deployment"
                name        = "test"
            }
        }
    }
Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.
╷
│ Warning: Resource targeting is in effect
│
│ You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration.
│
│ The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.
╵
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
If I try to apply, I get the same error as in the previous apply:
╷
│ Error: Failed to update horizontal pod autoscaler: HorizontalPodAutoscaler.autoscaling "sidekiq" is invalid: [spec.metrics[0].pods.target.averageValue: Required value: must specify a positive target averageValue, spec.metrics[1].pods.target.averageValue: Required value: must specify a positive target averageValue]
│
│ with kubernetes_horizontal_pod_autoscaler_v2.sidekiq,
│ on hpa.tf line 81, in resource "kubernetes_horizontal_pod_autoscaler_v2" "sidekiq":
│ 81: resource "kubernetes_horizontal_pod_autoscaler_v2" "sidekiq" {
│
Hi @fabn,
Thank you for reporting this issue. For some metric types, averageValue should be set regardless of the target type. We haven't taken this into account in the provider logic. I think we should be able to fix this soon.
Thanks!
Hi @arybolovlev, any news about this?