F5 modules are taking a very long time to execute a task - approx. 2137 seconds.
COMPONENT NAME
bigip_device_info
Environment
ANSIBLE VERSION
ansible [core 2.15.10]
config file = /Users/[email protected]/Documents/002CaseStudies/001ReposGIT/001AlaskaAirlines/019_f5_dna_dnw_resiliency/ansible.cfg
configured module search path = ['/Users/[email protected]/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/[email protected]/Documents/002CaseStudies/001ReposGIT/001AlaskaAirlines/019_f5_dna_dnw_resiliency/lib/python3.9/site-packages/ansible
ansible collection location = /Users/[email protected]/Documents/002CaseStudies/001ReposGIT/001AlaskaAirlines/019_f5_dna_dnw_resiliency/collections/ansible_collections
executable location = /Users/[email protected]/Documents/002CaseStudies/001ReposGIT/001AlaskaAirlines/019_f5_dna_dnw_resiliency/bin/ansible
python version = 3.9.6 (v3.9.6:db3ff76da1, Jun 28 2021, 11:14:58) [Clang 12.0.5 (clang-1205.0.22.9)] (/Users/[email protected]/Documents/002CaseStudies/001ReposGIT/001AlaskaAirlines/019_f5_dna_dnw_resiliency/bin/python3)
jinja version = 3.1.3
libyaml = True
BIGIP VERSION
Sys::Version
Main Package
Product BIG-IP
Version 15.1.8.1
Build 0.0.3
Edition Point Release 1
Date Fri Dec 16 05:41:59 PST 2022
CONFIGURATION
[defaults]
inventory = ./inventory.yaml
callbacks_enabled = profile_tasks
collections_paths=./collections/ansible_collections/
roles_path=./roles/
gathering = explicit
retry_files_enabled = False
filter_plugins = ./plugins/filters/
host_key_checking = False
look_for_keys = False
host_key_auto_add = True
OS / ENVIRONMENT
ProductName: macOS
ProductVersion: 12.1
BuildVersion: 21C52
SUMMARY
My intent here is to use the "bigip_device_info" module to gather a subset of device info, so I can iterate over the current config data and disable some or all virtual servers as required.
When I run the playbook, it takes ages just to gather the subset data. The last execution took roughly 2137 seconds to complete.
STEPS TO REPRODUCE
---
- name: Disable All or Selected LTM VIPs
  hosts: prod_f5
  connection: local
  gather_facts: no
  # validate_certs: no
  collections:
    - f5networks.f5_modules
  tasks:
    - name: "000_Task01 - Gathering BIG-IP VIPs & Pools Info"
      f5networks.f5_modules.bigip_device_info:
        provider: "{{ conn_details }}"
        gather_subset:
          - ltm-pools
          - virtual-servers
          - irules
      register: ltm_active_pools
      # vars:
      #   ansible_command_timeout: 900

    - name: "010_Task01 - Setting up VIPs & Pools Facts"
      set_fact:
        vip_details: "{{ ltm_active_pools | VipsPoolsMembersState }}"
      delegate_to: localhost
      # Filter path: ./plugins/filters/f5_ltm_allVirtualSrversPoolsAndMembers_filter.py

    - name: "015_Task02 - Backing up the Current State of ALL VIPs & Pools State"
      copy:
        content: "{{ vip_details | to_nice_json }}"
        dest: "./datastore/{{ inventory_hostname }}_LTMvirtualServersStateBackup.json"
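
For context, the eventual disable step this playbook builds toward might look like the sketch below. The shape of vip_details depends on the custom VipsPoolsMembersState filter, so the loop expression here is an assumption:

    - name: "020_Task03 - Disabling selected VIPs (illustrative sketch)"
      f5networks.f5_modules.bigip_virtual_server:
        provider: "{{ conn_details }}"
        name: "{{ item.name }}"
        state: disabled
      # Assumes vip_details is a list of dicts with a 'name' key;
      # the actual structure comes from the custom filter plugin.
      loop: "{{ vip_details }}"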
EXPECTED RESULTS
I should have received the standard "Return Values" for all the requested subsets within 1-2 minutes at most. That's my expectation with automation.
ACTUAL RESULTS
The playbook is taking 2000+ seconds to execute a single task. Here is the timing output of the last execution:
000_Task01 - Gathering BIG-IP VIPs & Pools Info ---------------------- 2137.28s
ansible-playbook [core 2.15.10]
config file = /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/ansible.cfg
configured module search path = ['/Users/sri/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/lib/python3.9/site-packages/ansible
ansible collection location = /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/collections/ansible_collections
executable location = /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/bin/ansible-playbook
python version = 3.9.6 (v3.9.6:db3ff76da1, Jun 28 2021, 11:14:58) [Clang 12.0.5 (clang-1205.0.22.9)] (/Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/bin/python3)
jinja version = 3.1.3
libyaml = True
Using /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/ansible.cfg as config file
host_list declined parsing /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/inventory.yaml as it did not pass its verify_file() method
script declined parsing /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/inventory.yaml as it did not pass its verify_file() method
Parsed /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/inventory.yaml inventory source with ini plugin
redirecting (type: callback) ansible.builtin.profile_tasks to ansible.posix.profile_tasks
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: collect_f5_facts.yaml ************************************************
1 plays in collect_f5_facts.yaml
PLAY [Disable All or Selected LTM VIPs] ****************************************
TASK [000_Task01 - Gathering BIG-IP VIPs & Pools Info] *************************
task path: /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/collect_f5_facts.yaml:11
Thursday 28 March 2024 01:24:57 -0700 (0:00:00.015) 0:00:00.015 ********
<bigip3b-mgmt> ESTABLISH LOCAL CONNECTION FOR USER: sri
<bigip3b-mgmt> EXEC /bin/sh -c 'echo '"'"'~sri'"'"' && sleep 0'
<bigip3b-mgmt> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~sri/.ansible/tmp `"&& mkdir "` echo ~sri/.ansible/tmp/ansible-tmp-1711614297.47084-16647-85093150767963 `" && echo ansible-tmp-1711614297.47084-16647-85093150767963="` echo ~sri/.ansible/tmp/ansible-tmp-1711614297.47084-16647-85093150767963 `" ) && sleep 0'
Using module file /Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/lib/python3.9/site-packages/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py
<bigip3b-mgmt> PUT /Users/sri/.ansible/tmp/ansible-local-16640024c2bvu/tmpdxr9hgj7 TO /Users/sri/.ansible/tmp/ansible-tmp-1711614297.47084-16647-85093150767963/AnsiballZ_bigip_device_info.py
<bigip3b-mgmt> EXEC /bin/sh -c 'chmod u+x /Users/sri/.ansible/tmp/ansible-tmp-1711614297.47084-16647-85093150767963/ /Users/sri/.ansible/tmp/ansible-tmp-1711614297.47084-16647-85093150767963/AnsiballZ_bigip_device_info.py && sleep 0'
<bigip3b-mgmt> EXEC /bin/sh -c '/Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/bin/python3 /Users/sri/.ansible/tmp/ansible-tmp-1711614297.47084-16647-85093150767963/AnsiballZ_bigip_device_info.py && sleep 0'
<bigip3b-mgmt> EXEC /bin/sh -c 'rm -f -r /Users/sri/.ansible/tmp/ansible-tmp-1711614297.47084-16647-85093150767963/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 18659, in main
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 18329, in exec_module
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 18437, in execute_managers
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 17818, in exec_module
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 17826, in _exec_module
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 7721, in to_return
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 17396, in type
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 17476, in has_fasthttp_profiles
File "/var/folders/1k/xk66d_jd0313dp29666tk_qh0000gq/T/ansible_f5networks.f5_modules.bigip_device_info_payload_t5jj8tqa/ansible_f5networks.f5_modules.bigip_device_info_payload.zip/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py", line 17547, in _read_current_fasthttp_profiles_from_device
fatal: [bigip3b-mgmt]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "data_increment": 10,
            "gather_subset": [
                "ltm-pools",
                "virtual-servers",
                "irules"
            ],
            "partition": "Common",
            "provider": {
                "auth_provider": null,
                "no_f5_teem": false,
                "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "server": "bigip3b-mgmt",
                "server_port": 443,
                "timeout": null,
                "transport": "rest",
                "user": "ltmansible",
                "validate_certs": false
            }
        }
    },
    "msg": "Expecting value: line 1 column 1 (char 0)"
}
PLAY RECAP *********************************************************************
bigip3b-mgmt : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Thursday 28 March 2024 02:00:34 -0700 (0:35:37.281) 0:35:37.297 ********
===============================================================================
000_Task01 - Gathering BIG-IP VIPs & Pools Info ---------------------- 2137.28s
/Users/sri/Documents/002CaseStudies/001ReposGIT/001AAG/019_f5_dna_dnw_resiliency/collect_f5_facts.yaml:11
Hi @asrivastav-aag,
How many VS, pools and irules are configured on the device you targeted?
@pgouband, thank you for looking into it. Here are the total counts: Virtual Servers - 164, Pools - 217, iRules - 180.
We have about 16 LTMs, and each has a different number of virtual servers, pools, and iRules. I want to use the playbook for all the LTMs.
Hi @asrivastav-aag
Have you tried splitting the task into 3: one for virtual servers, one for pools, and one for iRules? Maybe the rendering is the issue.
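For reference, the split would look roughly like this (a minimal sketch; task names and registered variable names are illustrative):

    - name: "Gathering LTM Virtual Servers Info"
      f5networks.f5_modules.bigip_device_info:
        provider: "{{ conn_details }}"
        gather_subset:
          - virtual-servers
      register: ltm_vs_info

    - name: "Gathering LTM Pools Info"
      f5networks.f5_modules.bigip_device_info:
        provider: "{{ conn_details }}"
        gather_subset:
          - ltm-pools
      register: ltm_pools_info

    - name: "Gathering LTM iRules Info"
      f5networks.f5_modules.bigip_device_info:
        provider: "{{ conn_details }}"
        gather_subset:
          - irules
      register: ltm_irules_info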
I haven't tried splitting the tasks yet. However, I did try using one subset at a time, and it showed the same behavior, just taking less time since only one subset was involved.
I will try splitting the tasks today and confirm.
Hi @pgouband, I just ran only the "virtual-servers" subset; it took almost 20 minutes to collect the data from the LTM. Task start time: 07:51:00. Task end time: 08:09:36.
===============================================================================
000_Task00 - Gathering LTM VIPs Info --------------------------------- 1116.41s
/Users/sri/Documents/002CaseStudies/001ReposGIT/003aag-ITSRepos/006_f5_dna_dnw_resiliency/get_virtual_servers.yaml:2
000_Task01 - Gathering LTM Pools Info --------------------------------- 614.93s
/Users/sri/Documents/002CaseStudies/001ReposGIT/003aag-ITSRepos/006_f5_dna_dnw_resiliency/get_pools.yaml:2
000_Task03 - Gathering LTM iRules Info --------------------------------- 32.64s
/Users/sri/Documents/002CaseStudies/001ReposGIT/003aag-ITSRepos/006_f5_dna_dnw_resiliency/get_irules.yaml:2
Hi @asrivastav-aag,
When running the Ansible playbook, have you checked the logs on the device and the CPU/memory utilization? Is Mgmt in resource provisioning set to small? https://my.f5.com/manage/s/article/K26427018
This is what we have on the BIG-IP.
Hi @asrivastav-aag,
Ok so resource provisioning on mgmt seems fine.
When running the Ansible playbook, have you checked the logs on the device and the CPU/memory utilization?
Auth Logs
type=USER_AUTH msg=audit(1711638580.662:583718): pid=1948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:authentication grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_ACCT msg=audit(1711638580.896:583719): pid=1948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:accounting grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_AUTH msg=audit(1711638583.998:583720): pid=1957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:authentication grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_ACCT msg=audit(1711638584.122:583721): pid=1957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:accounting grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_AUTH msg=audit(1711638587.611:583722): pid=1965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:authentication grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_ACCT msg=audit(1711638587.813:583723): pid=1965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:accounting grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_AUTH msg=audit(1711639193.804:583906): pid=2369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:authentication grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_ACCT msg=audit(1711639193.997:583907): pid=2369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:accounting grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_AUTH msg=audit(1711639197.522:583908): pid=2378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:authentication grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_ACCT msg=audit(1711639197.702:583909): pid=2378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:accounting grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_AUTH msg=audit(1711639201.631:583964): pid=2416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:authentication grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
type=USER_ACCT msg=audit(1711639201.874:583967): pid=2416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:initrc_t:s0 msg='op=PAM:accounting grantors=pam_ldap acct="ltmansible" exe="/usr/bin/pam-authenticator" hostname=10.21.116.34 addr=10.21.116.34 terminal=? res=success'
restjavad-audit Logs
BIG-IP resource utilization during the playbook run.
Let me know if I need to run any debug commands or collect other specific logs from the F5.
Hi @asrivastav-aag,
First, I wasn't asking you to share logs here. Second, I think you should open a support case via https://my.f5.com/manage/s/
Sorry, I thought I would share the logs here so that if I missed anything, you or someone else would catch it. Specifically anything related to the URIs. :)
no problem. I'll have a look tomorrow if I have time.
Thank you so much! I will also get a ticket open with F5.
@asrivastav-aag bigip_device_info has 2 parameters that let you fine-tune the amount of data it grabs: https://github.com/F5Networks/f5-ansible/blob/devel/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py#L28 https://github.com/F5Networks/f5-ansible/blob/devel/ansible_collections/f5networks/f5_modules/plugins/modules/bigip_device_info.py#L21
See if adjusting these values helps
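For example, raising data_increment reduces the number of paginated REST calls the module makes. A minimal sketch; the value of 500 below is illustrative and should be tuned per device (the failure output above shows the default of 10):

    - name: "000_Task01 - Gathering BIG-IP VIPs & Pools Info"
      f5networks.f5_modules.bigip_device_info:
        provider: "{{ conn_details }}"
        # Fetch more objects per REST request (default is 10, as seen in
        # the module_args above); larger values mean fewer round trips.
        data_increment: 500
        # Optionally limit the query to a single partition.
        partition: Common
        gather_subset:
          - ltm-pools
          - virtual-servers
          - irules
      register: ltm_active_pools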
@wojtek0806, this was a great suggestion!! Thank you so much!
Adjusting the data_increment value drastically decreased the runtime, from 2000+ seconds to about 300 seconds.
1 subset per task
===============================================================================
000_Task00 - Gathering LTM VIPs Info ----------------------------------- 181.07s
000_Task01 - Gathering LTM Pools Info ----------------------------------- 94.79s
000_Task03 - Gathering LTM iRules Info ----------------------------------- 7.78s
debug -------------------------------------------------------------------- 3.08s
1 subset per task
===============================================================================
000_Task00 - Gathering LTM VIPs Info ---------------------------------- 178.57s
000_Task01 - Gathering LTM Pools Info ---------------------------------- 95.23s
000_Task03 - Gathering LTM iRules Info ---------------------------------- 7.88s
debug ------------------------------------------------------------------- 3.04s
4 subsets in one task
===============================================================================
000_Task01 - Gathering BIG-IP VIPs, Pools, iRules & Device Info ------- 271.67s
015_Task02 - Backing up the Current State of ALL VIPs & Pools State ----- 1.03s
010_Task01 - Setting up VIPs & Pools Facts ------------------------------ 0.41s
4 subsets in one task
===============================================================================
000_Task01 - Gathering BIG-IP VIPs, Pools, iRules & Device Info ------- 274.79s
010_Task01 - Setting up VIPs & Pools Facts ------------------------------ 0.33s
Hi @asrivastav-aag, I guess we are good to close this issue.
Yes, we are good to close this issue, but I was wondering: are there any other options I can use to reduce the time? 5 minutes is still a long time for a runbook. Thank you for all the support, everyone!
Hi @asrivastav-aag,
Not sure there is more we can do. I'll close this issue.