terraform-provider-kubernetes
data.kubernetes_ingress does not return status even when the Ingress is fully deployed
Terraform Version, Provider Version and Kubernetes Version
Terraform version: v1.5.x
Kubernetes provider version: 2.22.0
Kubernetes version: 1.27.x
Affected Resource(s)
data.kubernetes_ingress
Terraform Configuration Files
resource "helm_release" "open_webui" {
name = "open-webui"
repository = "https://helm.openwebui.com/"
chart = "open-webui"
# version = var.webui_chart_version # Optional: pin the chart version if needed
wait = true
namespace = "open-webui"
create_namespace = true
# Include the Helm values file
values = [
file("helm/webui_values.yaml") # Ensures relative path resolution
]
# Set timeout to 15 minutes
timeout = 900
}
###############
# check ingress
###############

# resource "time_sleep" "wait_for_ingress" {
#   depends_on      = [helm_release.open_webui]
#   create_duration = "120s" # Adjust as necessary for your environment
# }

resource "null_resource" "wait_for_ingress" {
  triggers = {
    timestamp = timestamp()
  }

  depends_on = [helm_release.open_webui]
}
# Get the load balancer DNS
data "kubernetes_ingress" "openwebui_ingress" {
  metadata {
    name      = "open-webui"
    namespace = "open-webui"
  }

  depends_on = [null_resource.wait_for_ingress]
}

output "loadbalancer_dns" {
  value = {
    hostname = data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.hostname
    ip       = data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.ip
  }
}
resource "kubectl_manifest" "patched_openwebui_ingress" {
yaml_body = <<EOT
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: open-webui
namespace: open-webui
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/cluster-issuer: "self-signed-cluster-issuer"
spec:
rules:
- host: "${data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.hostname}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: open-webui
port:
number: 8080
tls:
- hosts:
- "${data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.hostname}"
secretName: openwebui-tls-secret
EOT
depends_on = [data.kubernetes_ingress.openwebui_ingress, kubectl_manifest.openwebui_certificate]
}`
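Side note: while debugging, wrapping the lookups in try() avoids the hard "Attempt to index null value" failure and surfaces the null status in the output instead. A minimal sketch only (the output name loadbalancer_dns_guarded is illustrative, not part of the original configuration):

# Sketch only: returns null instead of failing the plan while status is unset.
output "loadbalancer_dns_guarded" {
  value = {
    hostname = try(data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.hostname, null)
    ip       = try(data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.ip, null)
  }
}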
Debug Output
Gist: ingress_datasource_deploy_failure.log
Panic Output
N/A
Steps to Reproduce
Run terraform apply with the configuration above.
Expected Behavior
The data.kubernetes_ingress data source should return the populated status field (including status.loadBalancer.ingress) once the Ingress is fully deployed, matching what kubectl reports.
Actual Behavior
The data.kubernetes_ingress data source returns null for the status field, even though the Kubernetes Ingress is fully deployed and kubectl confirms that status.loadBalancer.ingress is populated. The apply then fails when the null status is indexed:
helm_release.open_webui: Still creating... [2m30s elapsed]
helm_release.open_webui: Still creating... [2m40s elapsed]
helm_release.open_webui: Creation complete after 2m43s [id=open-webui]
null_resource.wait_for_ingress: Creating...
null_resource.wait_for_ingress: Creation complete after 0s [id=9110468478181892351]
data.kubernetes_ingress.openwebui_ingress: Reading...
data.kubernetes_ingress.openwebui_ingress: Read complete after 0s
╷
│ Error: Attempt to index null value
│
│ on helm_cert_manager.tf line 79, in resource "kubectl_manifest" "openwebui_certificate":
│ 79: - "${data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.hostname}"
│ ├────────────────
│ │ data.kubernetes_ingress.openwebui_ingress.status is null
│
│ This value is null, so it does not have any indices.
╵
╷
│ Error: Attempt to index null value
│
│ on helm_ollama_webui_notls.tf line 79, in output "loadbalancer_dns":
│ 79: hostname = data.kubernetes_ingress.openwebui_ingress.status.0.load_balancer.0.ingress.0.hostname
│ ├────────────────
│ │ data.kubernetes_ingress.openwebui_ingress.status is null
│
│ This value is null, so it does not have any indices.
╵
Important Factoids
The kubectl command verifies that the Ingress status.loadBalancer.ingress field is populated correctly:
kubectl get ingress open-webui -n open-webui -o yaml
Example Output:
status:
  loadBalancer:
    ingress:
      - hostname: valid-hostname.lb.civo.com
        ip: 192.0.2.1
Introducing delays (e.g., time_sleep) or dependencies on managed resources (e.g., helm_release) does not resolve the issue. The cluster is deployed in Civo Kubernetes, but the issue appears unrelated to the cluster provider.
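If the kubernetes_ingress data source still reads the Ingress through the deprecated pre-1.22 API group (which would explain an empty status on a 1.27 cluster), the v1 variant may behave differently. A sketch of the same lookup using the kubernetes_ingress_v1 data source, assuming it is available in provider 2.22.0 and exposes the same status attributes:

# Sketch only: same lookup via the v1 data source (untested in this setup).
data "kubernetes_ingress_v1" "openwebui_ingress" {
  metadata {
    name      = "open-webui"
    namespace = "open-webui"
  }
}

output "loadbalancer_dns_v1" {
  # Assumes the v1 data source mirrors the status structure used above.
  value = {
    hostname = data.kubernetes_ingress_v1.openwebui_ingress.status.0.load_balancer.0.ingress.0.hostname
    ip       = data.kubernetes_ingress_v1.openwebui_ingress.status.0.load_balancer.0.ingress.0.ip
  }
}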