Allow downing of removed profiles in initscripts
On RHEL 6, when removing a bond using state=absent, the role removes the config files but does not actually take the bond interface down.
Subsequent attempts to down the bond via a playbook fail, because the role states that there is no connection defined.
The role needs the ability to modify a physical or virtual interface regardless of whether a configuration or "connection profile" exists.
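For reference, this is the part of the configuration that triggers the behavior; it is a trimmed excerpt of the full playbook further down, using the initscripts provider on RHEL 6:

network_connections:
  - name: dbbond
    state: absent
  - name: dbbond-link1
    state: absent
  - name: dbbond-link2
    state: absent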
Results of the playbook when marking the interfaces absent. The interface in question is "dbbond".
[root@tabserver ansible]# ansible-playbook -l util6vm net_demo_del.yml -vv
ansible-playbook 2.5.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible-playbook
  python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: net_demo_del.yml **************************************************************************************************************************
1 plays in net_demo_del.yml

PLAY [all] ******************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
task path: /home/tbowling/src/virt-demo/ansible/net_demo_del.yml:7
ok: [util6vm]
META: ran handlers

TASK [rhel-system-roles.network : Set version specific variables] ***********************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:1
ok: [util6vm] => (item=/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml) => {"ansible_facts": {"network_provider_default": "initscripts"}, "ansible_included_var_files": ["/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml"], "changed": false, "item": "/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml"}

TASK [rhel-system-roles.network : Install packages] *************************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:9
ok: [util6vm] => {"changed": false, "msg": "", "rc": 0, "results": []}

TASK [rhel-system-roles.network : Enable network service] *******************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:14
ok: [util6vm] => {"changed": false, "enabled": true, "name": "network", "state": "started"}
TASK [rhel-system-roles.network : Configure networking connection profiles] *************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:20
[WARNING]: [017]
[WARNING]: [018]
[WARNING]: [019]
[WARNING]: [020]
[WARNING]: [021]
[WARNING]: [022]
[WARNING]: [023]
[WARNING]: [024]
[WARNING]: [025]
[WARNING]: [026]
[WARNING]: [027]
[WARNING]: [028]
[WARNING]: [029]
[WARNING]: [030]
[WARNING]: [031]
[WARNING]: [032]
ok: [util6vm] => {"changed": false}
TASK [rhel-system-roles.network : Re-test connectivity] *********************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:27
ok: [util6vm] => {"changed": false, "ping": "pong"}
META: ran handlers
META: ran handlers

PLAY RECAP ******************************************************************************************************************************************
util6vm : ok=6 changed=0 unreachable=0 failed=0
But the bond still exists:
[root@util6vm network-scripts]# ifconfig
dbbond Link encap:Ethernet HWaddr 52:54:00:46:55:59
inet addr:192.168.75.137 Bcast:192.168.75.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe46:5559/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:63 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13311 (12.9 KiB) TX bytes:2019 (1.9 KiB)
Playbook output when trying to down the interface afterwards:
[root@tabserver ansible]# ansible-playbook -l util6vm net_demo_del.yml -vv
ansible-playbook 2.5.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible-playbook
  python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: net_demo_del.yml **************************************************************************************************************************
1 plays in net_demo_del.yml

PLAY [all] ******************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
task path: /home/tbowling/src/virt-demo/ansible/net_demo_del.yml:7
ok: [util6vm]
META: ran handlers

TASK [rhel-system-roles.network : Set version specific variables] ***********************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:1
ok: [util6vm] => (item=/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml) => {"ansible_facts": {"network_provider_default": "initscripts"}, "ansible_included_var_files": ["/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml"], "changed": false, "item": "/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml"}

TASK [rhel-system-roles.network : Install packages] *************************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:9
ok: [util6vm] => {"changed": false, "msg": "", "rc": 0, "results": []}

TASK [rhel-system-roles.network : Enable network service] *******************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:14
ok: [util6vm] => {"changed": false, "enabled": true, "name": "network", "state": "started"}
TASK [rhel-system-roles.network : Configure networking connection profiles] *************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:20
[WARNING]: [017]
[WARNING]: [018]
[WARNING]: [019]
[WARNING]: [020]
[WARNING]: [021]
[WARNING]: [022]
[WARNING]: [023]
[WARNING]: [024]
[WARNING]: [025]
[WARNING]: [026]
[WARNING]: [027]
[WARNING]: [028]
[WARNING]: [029]
[WARNING]: [030]
[WARNING]: [031]
[WARNING]: [032]
changed: [util6vm] => {"changed": true}
TASK [rhel-system-roles.network : Re-test connectivity] *********************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:27
ok: [util6vm] => {"changed": false, "ping": "pong"}
META: ran handlers
META: ran handlers

PLAY RECAP ******************************************************************************************************************************************
util6vm : ok=6 changed=1 unreachable=0 failed=0
[root@tabserver ansible]#
[root@tabserver ansible]#
[root@tabserver ansible]# ansible-playbook -l util6vm net_demo_del.yml -vv
ansible-playbook 2.5.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible-playbook
  python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: net_demo_del.yml **************************************************************************************************************************
1 plays in net_demo_del.yml

PLAY [all] ******************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
task path: /home/tbowling/src/virt-demo/ansible/net_demo_del.yml:7
ok: [util6vm]
META: ran handlers

TASK [rhel-system-roles.network : Set version specific variables] ***********************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:1
ok: [util6vm] => (item=/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml) => {"ansible_facts": {"network_provider_default": "initscripts"}, "ansible_included_var_files": ["/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml"], "changed": false, "item": "/usr/share/ansible/roles/rhel-system-roles.network/vars/RedHat-6.yml"}

TASK [rhel-system-roles.network : Install packages] *************************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:9
ok: [util6vm] => {"changed": false, "msg": "", "rc": 0, "results": []}

TASK [rhel-system-roles.network : Enable network service] *******************************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:14
ok: [util6vm] => {"changed": false, "enabled": true, "name": "network", "state": "started"}

TASK [rhel-system-roles.network : Configure networking connection profiles] *************************************************************************
task path: /usr/share/ansible/roles/rhel-system-roles.network/tasks/main.yml:20
fatal: [util6vm]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].name: state "down" references non-existing connection "dbbond""}
        to retry, use: --limit @/home/tbowling/src/virt-demo/ansible/net_demo_del.retry

PLAY RECAP ******************************************************************************************************************************************
util6vm : ok=4 changed=0 unreachable=0 failed=1
can you please provide the playbook?
I only added the 3 dbbond "down" tasks on the second attempt.
---
# Template playbook for Linux System Roles
# https://linux-system-roles.github.io/
# https://galaxy.ansible.com/linux-system-roles/
# https://github.com/linux-system-roles/
- hosts: all
  become: yes
  become_method: sudo
  become_user: root
  vars:
    # network_provider: nm  # or initscripts
    network_provider: initscripts
    network_connections:
      # - name: "Auto eth1"
      #   mac: "{{ hostvars[inventory_hostname].net1_mac }}"
      #   state: absent
      - name: dbbond
        state: down
      - name: dbbond-link1
        state: down
      - name: dbbond-link2
        state: down
      - name: "Auto eth2"
        state: absent
      - name: "Auto eth3"
        state: absent
      - name: "Auto eth4"
        state: absent
      - name: "Auto eth5"
        state: absent
      - name: eth1
        state: absent
      - name: eth2
        state: absent
      - name: eth3
        state: absent
      - name: eth4
        state: absent
      - name: eth5
        state: absent
      - name: net1
        state: absent
      - name: dbbond
        state: absent
      - name: dbbond-link1
        state: absent
      - name: dbbond-link2
        state: absent
      - name: webbond
        state: absent
      - name: webbond-link1
        state: absent
      - name: webbond-link2
        state: absent
  roles:
    - role: rhel-system-roles.network
Thanks. I formatted your comment.
fatal: [util6vm]: FAILED! => {"changed": false, "msg": "fatal error: configuration error: connections[0].name: state "down" references non-existing connection "dbbond""}
When you use the states up or down, the same playbook must also define the profile. You cannot just call down without specifying what the profile looks like. It is a configuration error, and the role tells you that.
It is this way because state: down essentially corresponds to a call to ifdown. Even if you just want to down the interface, you still need the ifcfg file in order to call ifdown.
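In other words, the down request only resolves if the same connection list also describes the profile. A minimal sketch of what that could look like for the bond above; the type and any bond options are assumptions and would have to match the real profile (whether the definition and the down request belong in one item or in two consecutive items may depend on the role version):

network_connections:
  # the profile has to be described so that an ifcfg file exists for ifdown
  - name: dbbond
    type: bond        # assumption: must match the real profile; bond options omitted
    state: down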
I don't think that the role should add API to bypass initscripts (e.g. taking down an interface without calling ifdown). What should it do? ip route flush && ip addr flush && killall dhclient?
The way to do it with the role is to define the profile (state: present), call ifdown on it (state: down), and delete it again (state: absent); a sketch of this sequence follows after the if_exists example below. This has problems, because you would do this to delete a profile whenever you want to migrate to a newer configuration. That means you want to migrate once, but running the playbook multiple times should result in no changes. Maybe there could be a new flag "if_exists" that works like:
network_connections:
  - name: oldprofile
    type: ethernet
    if_exists: oldprofile
  - name: oldprofile
    state: down
    if_exists: oldprofile
  - name: oldprofile
    state: absent
If you run the above playbook multiple times, the if_exists parts would be skipped, so you could use it for migration.
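For comparison, a rough sketch of the currently possible (but not idempotent) present/down/absent sequence described above, applied to the dbbond profile from the report; the type and options are placeholders that would have to match the real configuration:

network_connections:
  # 1. (re)define the profile so that an ifcfg file exists
  - name: dbbond
    type: bond
  # 2. call ifdown on it
  - name: dbbond
    state: down
  # 3. remove the profile again
  - name: dbbond
    state: absent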
On RHEL 6, when removing a bond using state=absent, the role removes the config files but does not actually take the bond interface down.
Subsequent attempts to down the bond via a playbook fail, because the role states that there is no connection defined.
The role needs the ability to modify a physical or virtual interface regardless of whether a configuration or "connection profile" exists.
#41 might help here. The other problem is that there are currently only four states (up, down, present, absent), but there are actually 7-9 states that people might want to configure (absent, present, down, up for the profile, and absent, down, up for the interface), since not all configurations and not all state changes make sense. However, "state: absent" currently only/mainly defines the state of the profile, not the state of the device. Therefore, finishing the definition of the states should also allow these use cases.
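Purely as an illustration of that split, and with keys that are hypothetical and not part of the current role schema, the profile/device distinction could be expressed roughly like this:

network_connections:
  - name: dbbond
    # hypothetical keys, not implemented in the role
    profile_state: absent     # remove the ifcfg file
    interface_state: down     # also take the device itself down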
I don't think that the role should add API to bypass initscripts (e.g. taking down an interface without calling ifdown). What should it do? ip route flush && ip addr flush && killall dhclient?
I am considering adding at least a way to call ip link set eth0 down and ip link delete br0 to work around initscripts. But I need to finish the work on defining the states to be sure when to call these commands. I believe ip link set eth0 down should take care of routes, addresses and dhclient.
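Until something like that exists in the role, a possible workaround is to clean up the orphaned device with plain ip/sysfs commands after the profiles have been removed, e.g. as extra tasks in the playbook. This is only a sketch outside the role API; it assumes the device is named dbbond and is not idempotent as written:

- name: Take the orphaned bond device down (workaround, not role API)
  command: ip link set dbbond down

- name: Remove the bond device via the bonding sysfs interface
  shell: echo "-dbbond" > /sys/class/net/bonding_masters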