[FEATURE] Change OS network tool from wicked to NetworkManager
Is your enhancement related to a problem? Please describe. Consider changing the OS network tool from wicked to NetworkManager, because NetworkManager is the default network tool in SLE-Micro-for-Rancher 5.3 and may provide the advantages described here:
- https://documentation.suse.com/sle-micro/5.3/html/SLE-Micro-all/cha-nm-vs-wicked.html
Describe the solution you'd like Replace wicked with NetworkManager; the change should remain compatible with upgrades.
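For illustration, here is a rough sketch of what the switch means at the configuration-file level. The file paths are the standard locations for each tool, but the interface name and addresses are made-up examples, not taken from this issue. With wicked, a statically configured interface lives in an ifcfg file:

```ini
# /etc/sysconfig/network/ifcfg-eth0  (wicked-style; illustrative values)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.10/24'
```

With NetworkManager, the equivalent state is expressed as a keyfile connection profile:

```ini
# /etc/NetworkManager/system-connections/eth0.nmconnection  (illustrative values)
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
method=manual
# address1 = ip/prefix,gateway
address1=192.168.1.10/24,192.168.1.1
```

Any migration tooling would need to translate configurations of the first form into the second (or generate the keyfiles directly at install time).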
Describe alternatives you've considered N/A
Additional context
- Related to the change in https://github.com/harvester/os2/pull/54
@mingshuoqiu Please check if we need this change for vxlan.
Not really necessary. VxLAN won't rely on NetworkManager.
Assigning to myself for now as I'm investigating as part of #7025
Pre Ready-For-Testing Checklist

- [x] If labeled: require/HEP Has the Harvester Enhancement Proposal PR submitted?
  The HEP PR is at: https://github.com/harvester/harvester/pull/9039
- [ ] Where is the reproduce steps/test steps documented?
  The reproduce steps/test steps are at:
  - From the HEP (https://github.com/harvester/harvester/pull/9039):
    - Install Harvester and verify that the various different ways of configuring networking all work:
      - Static IP
      - DHCP
      - VLAN
      - MTU
      - Bond Options
      - Management interface MAC address
    - Install Harvester v1.6.x then upgrade to v1.7.0, and verify that the upgrade succeeds and networking continues to operate correctly.
  - Please be aware of two current upgrade issues:
    - https://github.com/harvester/harvester/issues/9260
    - https://github.com/harvester/harvester/issues/9298
- [x] Have the backend code been merged (harvester, harvester-installer, etc) (including backport-needed/*)? The PRs are at:
  - https://github.com/harvester/os2/pull/212
  - https://github.com/harvester/harvester-installer/pull/1141
  - https://github.com/harvester/harvester-installer/pull/1150
  - https://github.com/harvester/harvester-installer/pull/1159
  - https://github.com/harvester/harvester/pull/9200
  - [x] Does the PR include the explanation for the fix or the feature?
- [ ] If labeled: require/doc, require/knowledge-base Has the necessary document PR submitted or merged? The documentation/KB PR is at: https://github.com/harvester/docs/pull/897
- [ ] If NOT labeled: not-require/test-plan Has the e2e test plan been merged? Have QAs agreed on the automation test case? If only test case skeleton w/o implementation, have you created an implementation issue?
- The automation skeleton PR is at:
- The automation test case PR is at:
@albinsun , net-install w/ 802.3AD was looked at here: https://github.com/harvester/harvester/issues/9393
Hi @irishgordo
For the different bond modes, if I understand correctly, testing depends on the environment's network setup (balance-tlb in the lab). Is there any way we can test this?
cc. @tserong
I'm not sure how to physically test this in the lab (sorry!), but one thing to check on the configuration side is that the selected mode is correctly set in the [bond] section of the /etc/NetworkManager/system-connections/bond-mgmt.nmconnection file. For example, by default with active-backup, you should see something like this:
```ini
# cat /etc/NetworkManager/system-connections/bond-mgmt.nmconnection
[connection]
id=bond-mgmt
type=bond
interface-name=mgmt-bo
master=mgmt-br
slave-type=bridge

[ethernet]

[bond]
miimon=100
mode=active-backup

[bridge-port]
```
If you select a different mode, you should see a change in the mode= line above.
...also, I've just learned another way to check that: ask NetworkManager directly what the bond options are for a given interface:
```shell
# nmcli --fields bond.options con show bond-mgmt
bond.options: mode=active-backup,miimon=100
```
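If you want to script this check across nodes without depending on `nmcli`, here is a small hypothetical helper (my addition, not from the thread): it pulls the `mode=` key out of the `[bond]` section of a keyfile of the layout shown above. The default path and interface names are assumptions based on the sample file in the previous comment.

```shell
#!/bin/sh
# bond_mode FILE - print the bond mode configured in a NetworkManager
# keyfile. Assumes the keyfile has a [bond] section containing a mode= key,
# as in the bond-mgmt.nmconnection example above.
bond_mode() {
    # Print the [bond] section up to the next section header,
    # then extract the value of the mode= key.
    sed -n '/^\[bond\]/,/^\[/p' "$1" | grep '^mode=' | cut -d= -f2
}

# Example (path assumed from the sample above):
# bond_mode /etc/NetworkManager/system-connections/bond-mgmt.nmconnection
```

The live kernel-side state can additionally be inspected under /proc/net/bonding/&lt;interface&gt; to confirm the mode actually in effect, not just the one configured.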
Test PASS, close as done.
Environment
- harvester-v1.7.0-rc3
- Profile: 3 nodes (witness) AMD64
  - QEMU/KVM
  - Bare-metal HPE ProLiant DL360
- ui-source: Auto
Test Scenario

IP
- [x] Static => no VLAN - 3 nodes, BIOS, default MTU
- [x] DHCP => no VLAN - 3 nodes, BIOS, default MTU
- [x] DHCP (MAC Binding) => no VLAN - 3 nodes, BIOS, default MTU

VLAN
- [x] no VLAN => no VLAN
- [x] w/ VLAN => w/ VLAN

MTU
- [x] default => no VLAN - 3 nodes, BIOS, default MTU
- [x] custom => no VLAN - 3 nodes witness, UEFI, custom MTU
Bond Options => Different bond modes (Single node)
- [x] active-backup
- [x] balance-rr
- [x] balance-xor
- [x] balance-tlb
- [x] balance-alb
- [x] broadcast
- [x] 802.3ad => https://github.com/harvester/harvester/issues/3418#issuecomment-3470776264
no VLAN (Local ipxe-example)

🟢 3 nodes, BIOS, default MTU
- Setup node0 with DHCP IP and MAC mapped DHCP VIP
  - Console
  - Cluster
  - VM works
- Join node1 with static IP
  - Console
  - Cluster
  - VM works
- Join node2 with DHCP IP
  - Console
  - Cluster
  - VM works
🟢 3 nodes witness, UEFI, custom MTU
- Setup node0 with static IP, custom MTU and MAC mapped DHCP VIP
  - Console
  - Cluster
  - VM works
- Join node1 with static IP and custom MTU
  - Console
  - Cluster
  - VM works
- Join witness node2 with static IP and custom MTU
  - Console
  - Cluster
  - VM works
🟢 Different bond modes (Single node)
- balance-rr
- balance-xor
- balance-tlb
- balance-alb
- broadcast
w/ VLAN (Baremetal Lab)

🟢 2 nodes, UEFI, default MTU
- Setup node0 with DHCP IP
  - Console
  - Cluster
  - VM works
- Join node1 with DHCP IP
  - Console
  - Cluster
  - VM works
Upgrade
Will be covered in release testing.