The Linux bonding driver provides a method for aggregating multiple network interface controllers (NICs) into a single logical bonded interface built from two or more so-called slave NICs. Most modern Linux distributions (distros) ship with a kernel in which the bonding driver (bonding) is integrated as a loadable kernel module.
We can check whether the bonding module is available from the command line.
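For example (exact output will vary by kernel and distribution):

modprobe bonding          # load the module if it is not already loaded
lsmod | grep bonding      # confirm the module is present in the running kernel
modinfo bonding | head    # show module details such as version and parameters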
Linux Teaming driver
The Linux Team driver provides an alternative to the bonding driver. It was designed to solve the same problem(s) using a wholly different design and approach, one in which special attention was paid to flexibility and efficiency. The best part is that the configuration, management, and monitoring of the Team driver are significantly improved, with no compromise on performance, features, or throughput. The main difference is that the Team driver's kernel part contains only essential code, while the rest (link validation, LACP implementation, decision making, etc.) runs in userspace as part of the teamd daemon.
Feature comparison between Bonding and Teaming
[Feature comparison table: Feature / Bonding / Team – source: redhat.com]
Migration
We can migrate the existing bond interface to a team interface.
bond2team is a CLI tool used to convert an existing bond interface to a team interface.
To convert a current bond0 configuration to team ifcfg, issue a command as root:
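bond2team --master bond0                   # convert bond0 to an equivalent team configuration
bond2team --master bond0 --rename team0    # same, but also rename the device to team0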
To convert a current bond0 configuration to team ifcfg, and to manually specify the path to the ifcfg file, issue a command as root:
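bond2team --master bond0 --configdir /path/to/ifcfg    # /path/to/ifcfg is a placeholder for your ifcfg directory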
It is also possible to create a team configuration by supplying the bond2team tool with a list of bonding parameters. For example:
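bond2team --bonding_opts "mode=1 miimon=500"    # mode=1 is active-backup; miimon=500 checks link state every 500 ms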
Configuration – Mode: active-backup
Step 1:
Check the available network device status
Check the network connection status
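On a NetworkManager-based system these checks look roughly like this:

nmcli device status       # list the available network devices and their states
nmcli connection show     # list the existing NetworkManager connections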
Step 2:
Creating a team interface named team0 in active-backup mode
Verify the new team connection is created
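A sketch of the nmcli commands for this step (the connection name team0 simply matches the interface name used here):

nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
nmcli connection show     # the new team0 connection should now be listed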
Step 3:
Add the slave interfaces (eno1 and eno2) and point their master to the newly created team interface team0
Verify the connection status now; we can see the team master and slaves here.
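Something like the following, where the con-name values are only illustrative:

nmcli connection add type team-slave con-name team0-slave1 ifname eno1 master team0
nmcli connection add type team-slave con-name team0-slave2 ifname eno2 master team0
nmcli connection show     # the slaves should now show team0 as their master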
Step 4:
Now assign the IP address to the team0 interface
Check the teaming status. Here we can see the team0 interface status and the active slave.
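For example (192.168.1.10/24 is only a placeholder address; substitute your own):

nmcli connection modify team0 ipv4.addresses 192.168.1.10/24 ipv4.method manual
nmcli connection up team0
teamdctl team0 state      # shows the runner, the slave ports and the currently active port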
Step 5:
Checking the redundancy
Now bring down the currently active slave interface and check that the other interface becomes active.
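Assuming eno1 is the currently active slave, the failover can be triggered and verified like this:

nmcli device disconnect eno1    # take the active slave down
teamdctl team0 state            # the active port should now have moved to eno2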
Now we can see that the active port has changed to eno2 automatically.
Configuration – Mode: round-robin
Step 1:
Check the network interfaces available
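For example:

nmcli device status       # confirm the NICs that will be enslaved to the new team are available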
Step 2:
Create a teaming interface with the roundrobin runner
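A sketch of the command, using the same JSON config style as before:

nmcli connection add type team con-name team1 ifname team1 config '{"runner": {"name": "roundrobin"}}'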
Step 3:
Create the slave interfaces and point their master to team1
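Something like the following, assuming the earlier team0 slave connections have been removed and the same two NICs (eno1 and eno2) are reused; adjust the ifname values to your environment:

nmcli connection add type team-slave con-name team1-slave1 ifname eno1 master team1
nmcli connection add type team-slave con-name team1-slave2 ifname eno2 master team1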
Step 4:
Assign IP to team1 interface
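For example (10.0.0.23/24 is only a placeholder for node1; the article only states that node2 uses 10.0.0.24):

nmcli connection modify team1 ipv4.addresses 10.0.0.23/24 ipv4.method manual
nmcli connection up team1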
Step 5:
Check the connection status
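For example:

nmcli connection show     # team1 and its slaves should be active
teamdctl team1 state      # the runner should be reported as roundrobin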
Bandwidth analysis between individual NIC and teaming NIC (mode=round-robin)
I used another VM (node2) with an identical configuration, followed the same steps, and assigned the IP 10.0.0.24 to its team1 interface.
Step 1:
Install the qperf package on both nodes
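Assuming a RHEL/CentOS-style system with the package available in the configured repositories:

yum install -y qperf      # run on both node1 and node2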
Step 2:
Flush the iptables rules on both nodes
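For example, as root on both nodes:

iptables -F               # flush all iptables rules so the test traffic is not filtered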
Step 3:
Now bring down the teaming interface on both nodes to check the bandwidth with only the individual NICs in use
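For example, on both nodes:

nmcli connection down team1    # leave only the individual NICs carrying traffic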
Step 4:
Now starting qperf on node2
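qperf runs as a server when started with no arguments:

qperf                     # on node2; waits for incoming test connections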
Step 5:
Now running qperf for 5 minutes to check the TCP and UDP bandwidth between node1 and node2.
The IP 10.0.0.22 is assigned to NIC eno16777728 of node2.
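From node1, something like (300 seconds = 5 minutes):

qperf -t 300 10.0.0.22 tcp_bw udp_bw    # measure TCP and UDP bandwidth against node2's individual NIC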
Step 6:
Now that the bandwidth of the individual NIC has been tested, let's bring up the team interface and bring down the individual NIC.
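For example (on node1, disconnect the corresponding individual NIC, whose name is not shown in this article):

nmcli connection up team1               # on both nodes
nmcli device disconnect eno16777728     # on node2; take the individual NIC down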
Step 7:
Starting qperf on node2
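As before:

qperf                     # server mode again on node2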
Step 8:
Now running qperf for 5 minutes to check the TCP and UDP bandwidth between node1 and node2.
The IP 10.0.0.24 is assigned to the team1 interface of node2.
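From node1, the same test is repeated against the team1 address:

qperf -t 300 10.0.0.24 tcp_bw udp_bw    # measure TCP and UDP bandwidth against node2's team interface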
Now we can see that the bandwidth through the individual NIC and through the team interface is almost the same.
Discussion and feedback