terraform-provider-aws
aws_lb data source fails to identify LB by tag
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform CLI and Terraform AWS Provider Version
Terraform v1.0.3 on linux_amd64
- provider registry.terraform.io/hashicorp/aws v3.53.0
Affected Resource(s)
- aws_lb
Kubernetes service configuration:
We have a LoadBalancer provisioned in AWS, created by the AWS Load Balancer Controller. The service is configured like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: lb_name=foo-bar-balancer
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "600"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: ingress
spec:
  <...>
status:
  loadBalancer:
    ingress:
    - hostname: <>.elb.us-east-1.amazonaws.com
Terraform Configuration Files
data "aws_route53_zone" "dns_zone" {
  name         = var.hosted_zone
  private_zone = false
}

data "aws_lb" "main" {
  tags = {
    lb_name = local.lb_name_tag
  }

  depends_on = [helm_release.ingress_gateway]
}

resource "aws_route53_record" "top_level" {
  zone_id = data.aws_route53_zone.dns_zone.zone_id
  name    = var.hosted_zone
  type    = "A"

  alias {
    name                   = data.aws_lb.main.dns_name
    zone_id                = data.aws_lb.main.zone_id
    evaluate_target_health = true
  }
}
Expected Behavior
Terraform identifies the LoadBalancer based on the applied tag.
Actual Behavior
Terraform fails with the following error:
Error: Search returned 0 results, please revise so only one is returned
even though the LoadBalancer id appears two lines above in the log:
data.aws_lb.main: Reading... [id=arn:aws:elasticloadbalancing:us-east-1:494495723522:loadbalancer/net/################################/################]
data.aws_route53_zone.dns_zone: Read complete after 2s [id=Z0209050IZW6889EK68H]
Steps to Reproduce
- terraform apply
Additional notes:
This works with the v0.51.0 version of the aws provider. As a workaround, we pinned the provider to this version in the provider configuration.
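For anyone wanting the same workaround, a provider version can be pinned in the required_providers block; a minimal sketch (the version string simply mirrors the one mentioned above and may need adjusting for your setup):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Pin to the last version where the tag lookup worked for us.
      version = "= 0.51.0"
    }
  }
}
```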
I seem to have run into a similar situation. I am using the data source with tags to look up one of my 4 application/network load balancers, but it always returns all 4. Even if I leave the tags empty, it still returns 4. I tried adding new tags, specific to the one load balancer I want as a data source, and it still returns all of them. I am currently using Terraform 0.12.29. Hopefully we can get a reply or something regarding this.
I could not reproduce the issue; this works fine for me with version 3.58.
Same issue with provider v3.63.0. Did a TF_LOG=debug run, and it looks like the response coming from AWS doesn't even contain the internal LBs. Had to do:
data "kubernetes_service" "mylb" {
  metadata {
    name      = "my-lb-name"
    namespace = "namespace"
  }
}

data "aws_elb" "mylb" {
  name = split("-", split(".", data.kubernetes_service.mylb.status.0.load_balancer.0.ingress.0.hostname).0).1
}
This is crap, but at least it works :\
Edit: only have this problem for internal LBs
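For readers puzzling over the nested split above: it appears to rely on internal LB hostnames looking roughly like internal-<lb-name>-<hash>.elb.<region>.amazonaws.com (the format shown is illustrative, not guaranteed). Taking the first dot-separated segment and then the second dash-separated token recovers the LB name, assuming the name itself contains no dashes. The same extraction is more readable broken into locals:

```hcl
locals {
  # Hostname exposed by the Kubernetes service, e.g. (illustrative):
  # internal-mylb-1234567890.elb.us-east-1.amazonaws.com
  ingress_hostname = data.kubernetes_service.mylb.status.0.load_balancer.0.ingress.0.hostname

  # First dot-separated segment: "internal-mylb-1234567890"
  hostname_prefix = split(".", local.ingress_hostname)[0]

  # Second dash-separated token: "mylb" (breaks if the LB name
  # itself contains dashes).
  lb_name = split("-", local.hostname_prefix)[1]
}
```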
@csabakollar I just ran into this issue - and though I agree with your assessment...at least you found something working. Thank you!
Basically everything worked as intended for me until I had two LBs in the same environment; then my tags, though specific to an individual LB, would bring back all the LBs rather than filtering by tag.
I can also reproduce this with v3.48.0.
Any updates on this?
Seems to be working on the new 4.31.0 version. Version 3.75.0 is not working for me.
Got the same issue with aws provider v4.57.1 and kubernetes provider 2.13.0. I am using a local helm_release to deploy a Kubernetes Ingress which, with the AWS LB controller, creates an Application Load Balancer.
The data query fails whether I use data "aws_lb" or data "kubernetes_ingress_v1":
data "aws_alb" "internal" {
  tags = {
    "elbv2.k8s.aws/cluster" = local.local_prefix_with_env_suffix
  }

  depends_on = [
    helm_release.eks_alb_ingress
  ]
}

data "kubernetes_ingress_v1" "example" {
  metadata {
    name      = "services"
    namespace = "istio-ingress"
  }

  depends_on = [helm_release.eks_alb_ingress]
}

output "test" {
  value = <<EOF
${data.kubernetes_ingress_v1.example.status.0.load_balancer.0.ingress.0.hostname},
${data.aws_alb.internal.dns_name}
EOF
}
│ Error: Search returned 0 results, please revise so only one is returned
│
│ with data.aws_alb.internal,
│ Error: Invalid index
│
│ on output.tf line 39, in output "test":
│ 39: value = data.kubernetes_ingress_v1.example.status.0.load_balancer.0.ingress.0.hostname
The error seems to be caused by querying AWS/EKS right after the ALB is created, when the LB is not yet visible; terraform apply fails on the first run, but succeeds on the second.
Our workaround was to introduce a 15-second delay after creation of the load balancer, before querying AWS or EKS, using the time provider and a time_sleep resource:
terraform {
  required_providers {
    time = {
      source  = "hashicorp/time"
      version = "0.9.1"
    }
  }
}

resource "time_sleep" "wait_15_seconds" {
  create_duration = "15s"

  depends_on = [helm_release.eks_alb_ingress]
}

data "aws_alb" "internal" {
  tags = {
    "elbv2.k8s.aws/cluster" = local.local_prefix_with_env_suffix
  }

  depends_on = [
    time_sleep.wait_15_seconds
  ]
}

data "kubernetes_ingress_v1" "example" {
  metadata {
    name      = "services"
    namespace = "istio-ingress"
  }

  depends_on = [time_sleep.wait_15_seconds]
}
If possible, I think adding a timeout or retry option to the aws_lb data source would be great.