terraform-provider-bigip
We are facing an error while creating Pools in the Local Traffic component of WAF using Terraform. The error is: "Error: Failure adding node xxxxx to pool /Common/xxxxx: HTTP 400 :: {"code":400,"message":"01070734:3: Configuration error: Node (/Common/xxxxx) already exists as FQDN (xxxxx)","errorStack":[],"apiError":3}"
Environment
- TMOS/Bigip Version:
 - Terraform Version:
 - Terraform bigip provider Version:
 
Summary
A clear and concise description of what the bug is. Please also include information about the reproducibility and the severity/impact of the issue.
Steps To Reproduce
Steps to reproduce the behavior:
- 
Provide the Terraform resource config you are having trouble with, along with its output.
 - 
To help diagnose the issue, provide Terraform debug logs.
 - 
To capture debug logs, export the TF_LOG variable with debug ( export TF_LOG=DEBUG ) before running terraform apply/plan.
 - 
AS3/DO JSON along with the resource config (for AS3/DO resource issues)
 
Expected Behavior
A clear and concise description of what you expected to happen.
Actual Behavior
A clear and concise description of what actually happens. Please include any applicable error output.
@rakotkar0608 can you please share terraform config for reproduction of issue.
Looping Suraj..
Hi Ravinder,
Please see the code below, which we have written in Terraform (we are still new to this).
Background: our understanding is that Pools depend on Nodes, and Nodes depend on Monitors (please correct us if this is wrong).
Problem Description:
- We create Monitors first, Nodes second, and Pools third; up to this point everything works. When we try to add members to the pools, we get an error, but not for all pools. For example, out of about 12 pools, members are added successfully for 5, but for the remaining 7 we get the error below.
 
Error: Failure adding node XXXX-XXXX-XXXX-XXXX:443 to pool /Common/XXXX-XXXX-XXXX-XXXX-XXXX: HTTP 400 :: {"code":400,"message":"01070734:3: Configuration error: Node (/Common/XXXX-XXXX-XXXX-XXXX) already exists as FQDN (XXXX-XXXX-XXXX-XXXX.XXXX.XXXX.XXXX)","errorStack":[],"apiError":3}
Please advise, and thank you for replying. We are waiting for your next response.
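For reference, the dependency ordering described above (Monitors before Nodes, Nodes before Pools) can be sketched with explicit depends_on links. This is an illustrative sketch only; the resource names (app_monitor, app_node, app_pool) and addresses are hypothetical placeholders, not taken from the original config.

```hcl
# Illustrative sketch of the monitor -> node -> pool ordering described above.
# All names and addresses below are hypothetical.

resource "bigip_ltm_monitor" "app_monitor" {
  name   = "/Common/app-https-monitor"
  parent = "/Common/https"
}

resource "bigip_ltm_node" "app_node" {
  name    = "/Common/app-node-1"
  address = "192.0.2.10"

  # Ensure the monitor exists before the node is created
  depends_on = [bigip_ltm_monitor.app_monitor]
}

resource "bigip_ltm_pool" "app_pool" {
  name     = "/Common/app-pool"
  monitors = ["/Common/app-https-monitor"]

  # Ensure nodes exist before the pool references them
  depends_on = [bigip_ltm_node.app_node]
}
```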
******************************************************************************************************************
Resource block to create Pools [Path: Local Traffic -> Pools]
******************************************************************************************************************
We have created YAML files containing the values for each attribute required to create the resource.
locals {
  pools_files = fileset("./", "XXXXXXXX/pools/*.yml")
  pools       = { for pool_file in local.pools_files : pool_file => yamldecode(file(pool_file)) }
}

resource "bigip_ltm_pool" "pool" {
  provider               = bigip.device1
  for_each               = local.pools
  name                   = each.value.name
  load_balancing_mode    = each.value.load_balancing_mode
  minimum_active_members = each.value.minimum_active_members
  monitors               = [each.value.monitors]

  depends_on = [
    bigip_ltm_node.node
  ]
}

resource "time_sleep" "wait_30_seconds" {
  create_duration = "30s"

  depends_on = [
    bigip_ltm_pool.pool
  ]
}

resource "bigip_ltm_pool_attachment" "attach_node" {
  provider = bigip.device1
  for_each = local.pools
  pool     = each.value.name
  node     = each.value.node

  depends_on = [
    time_sleep.wait_30_seconds
  ]
}
@surajmahajan2010 are you trying to add the same nodes/members to different pools?
Hi @RavinderReddyF5, yes, kind of: the node names are different, but the FQDN is the same for the new nodes.
@RavinderReddyF5 adding more information:
I am trying to replicate our existing WAF server, which was configured manually; now I am trying to do the same through automation with Terraform. When I add the nodes to the pool manually there is no error, but when I do it with Terraform it gives me the error.
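One guess at what reproduces the BIG-IP error described here: two node resources with different Terraform names but the same FQDN address. BIG-IP rejects the second because that FQDN already backs an existing node object. This sketch is hypothetical (names and FQDN are placeholders), and it assumes the provider accepts an FQDN in the node's address argument.

```hcl
# Hypothetical sketch: two nodes with different names but the same FQDN.
# BIG-IP would reject the second with "already exists as FQDN (...)".

resource "bigip_ltm_node" "node_a" {
  name    = "/Common/app-node-a"
  address = "app.example.com" # FQDN, not an IP
}

resource "bigip_ltm_node" "node_b" {
  name    = "/Common/app-node-b"
  address = "app.example.com" # same FQDN -> 01070734 configuration error
}
```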
Hi @surajmahajan2010,
I ran a test with the following config and saw no error.
main.tf
locals {
  pools_files = fileset("./", "pools/*.yml")
  pools       = { for pool_file in local.pools_files : pool_file => yamldecode(file(pool_file)) }
}
resource "bigip_ltm_pool" "pool" {
	for_each = local.pools
		name = each.value.name
		load_balancing_mode = each.value.load_balancing_mode
		minimum_active_members = each.value.minimum_active_members
		monitors = [each.value.monitors]
}
resource "time_sleep" "wait_30_seconds" {
	create_duration = "30s"
	depends_on = [
		bigip_ltm_pool.pool
	]
}
resource "bigip_ltm_pool_attachment" "attach_node" {
	for_each = local.pools
		pool = each.value.name
		node = each.value.node
	depends_on = [
		time_sleep.wait_30_seconds
	]
}
pool1.yml
name: "/Common/terraform-pool1"
load_balancing_mode: "round-robin"
description: "Test-Pool"
monitors: "/Common/tcp"
minimum_active_members: 1
allow_snat: "yes"
allow_nat: "yes"
node: "192.168.30.1:80"
pool2.yml
name: "/Common/terraform-pool2"
load_balancing_mode: "round-robin"
description: "Test-Pool2"
monitors: "/Common/http"
minimum_active_members: 2
allow_snat: "yes"
allow_nat: "yes"
node: "192.168.50.1:80"
$ terraform plan -out test-tmp
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # bigip_ltm_pool.pool["pools/pool1.yml"] will be created
  + resource "bigip_ltm_pool" "pool" {
      + allow_nat              = (known after apply)
      + allow_snat             = (known after apply)
      + id                     = (known after apply)
      + load_balancing_mode    = "round-robin"
      + minimum_active_members = 1
      + monitors               = [
          + "/Common/tcp",
        ]
      + name                   = "/Common/terraform-pool1"
      + reselect_tries         = (known after apply)
      + service_down_action    = (known after apply)
      + slow_ramp_time         = (known after apply)
    }
  # bigip_ltm_pool.pool["pools/pool2.yml"] will be created
  + resource "bigip_ltm_pool" "pool" {
      + allow_nat              = (known after apply)
      + allow_snat             = (known after apply)
      + id                     = (known after apply)
      + load_balancing_mode    = "round-robin"
      + minimum_active_members = 2
      + monitors               = [
          + "/Common/http",
        ]
      + name                   = "/Common/terraform-pool2"
      + reselect_tries         = (known after apply)
      + service_down_action    = (known after apply)
      + slow_ramp_time         = (known after apply)
    }
  # bigip_ltm_pool_attachment.attach_node["pools/pool1.yml"] will be created
  + resource "bigip_ltm_pool_attachment" "attach_node" {
      + connection_limit      = (known after apply)
      + connection_rate_limit = (known after apply)
      + dynamic_ratio         = (known after apply)
      + id                    = (known after apply)
      + node                  = "192.168.30.1:80"
      + pool                  = "/Common/terraform-pool1"
      + priority_group        = (known after apply)
      + ratio                 = (known after apply)
    }
  # bigip_ltm_pool_attachment.attach_node["pools/pool2.yml"] will be created
  + resource "bigip_ltm_pool_attachment" "attach_node" {
      + connection_limit      = (known after apply)
      + connection_rate_limit = (known after apply)
      + dynamic_ratio         = (known after apply)
      + id                    = (known after apply)
      + node                  = "192.168.50.1:80"
      + pool                  = "/Common/terraform-pool2"
      + priority_group        = (known after apply)
      + ratio                 = (known after apply)
    }
  # time_sleep.wait_30_seconds will be created
  + resource "time_sleep" "wait_30_seconds" {
      + create_duration = "30s"
      + id              = (known after apply)
    }
Plan: 5 to add, 0 to change, 0 to destroy.
───────────────────────────────────────────────────────────────────────────────────────────────
Saved the plan to: test-tmp
To perform exactly these actions, run the following command to apply:
    terraform apply "test-tmp"
$ terraform apply "test-tmp"
bigip_ltm_pool.pool["pools/pool2.yml"]: Creating...
bigip_ltm_pool.pool["pools/pool1.yml"]: Creating...
bigip_ltm_pool.pool["pools/pool2.yml"]: Creation complete after 0s [id=/Common/terraform-pool2]
bigip_ltm_pool.pool["pools/pool1.yml"]: Creation complete after 0s [id=/Common/terraform-pool1]
time_sleep.wait_30_seconds: Creating...
time_sleep.wait_30_seconds: Still creating... [10s elapsed]
time_sleep.wait_30_seconds: Still creating... [20s elapsed]
time_sleep.wait_30_seconds: Still creating... [30s elapsed]
time_sleep.wait_30_seconds: Creation complete after 30s [id=2022-09-22T15:10:13Z]
bigip_ltm_pool_attachment.attach_node["pools/pool1.yml"]: Creating...
bigip_ltm_pool_attachment.attach_node["pools/pool2.yml"]: Creating...
bigip_ltm_pool_attachment.attach_node["pools/pool2.yml"]: Creation complete after 1s [id=/Common/terraform-pool2]
bigip_ltm_pool_attachment.attach_node["pools/pool1.yml"]: Creation complete after 1s [id=/Common/terraform-pool1]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
However, I got the same error when I tried to create the node again using the bigip_ltm_node resource. So I think you are trying to create (or recreate) a node that already exists.
Thank you all for helping with this issue; closing it from our end as it has been resolved.
it is resolved