Red Hat Enterprise Linux allows administrators to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. The behaviour of the bonded interfaces depends on the bonding mode, which provides either hot standby or load balancing.
For each server, when bonding interfaces in pairs for high availability, make sure that the bonding is done across network cards; the bonding mode we use for this is “active-backup” (see the sketch after the list below):
network card 1 port 1 bound with network card 2 port 1
network card 1 port 2 bound with network card 2 port 2
network card 1 port 3 bound with network card 2 port 3
network card 1 port 4 bound with network card 2 port 4
etc.
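As a minimal sketch of this pairing (the interface names are assumptions; on a real server, verify which port belongs to which card, for example with ethtool -i), an active-backup bond across two cards could carry options such as:

ifcfg-bond0 (excerpt):
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
# mode=1 is active-backup; eth0 is assumed here to be network card 1 port 1
BONDING_OPTS="mode=1 miimon=100 primary=eth0"

The two paired ports (assumed here to be eth0 on card 1 and eth2 on card 2) then point to bond0 with MASTER= and SLAVE= directives in their own ifcfg files, as shown in Step 3 below.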
Steps for configuring bonding
In this tutorial we configure bond0 using the interfaces eth0 and eth1.
Step 1 - Load the kernel module
For a channel bonding interface to be valid, the bonding kernel module must be loaded. To ensure that the module is loaded when the channel bonding interface is brought up, create a new file, as root, in the /etc/modprobe.d/ directory. In this example we are configuring bond0, and the file is named bonding.conf:
# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
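To confirm that the alias works and the module loads cleanly before bringing any bond up, a quick check is (a sketch; the lsmod output varies by kernel):

# modprobe bonding
# lsmod | grep bonding

If the module is loaded, lsmod lists a bonding line.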
Step 2 - Create the channel bonding interface
We need to create a channel bonding interface configuration file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bond0:
# cat ifcfg-bond0
DEVICE=bond0
IPADDR=172.16.1.207
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=1000"
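Note that mode=0 selects round-robin, which matches the verification output shown later, and miimon=1000 polls the link every 1000 ms. For the active-backup pairing described at the start, only the BONDING_OPTS line would change; a possible variant (the miimon value is a common choice, not a requirement):

BONDING_OPTS="mode=active-backup miimon=100"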
Step 3 - Configure the network interfaces
After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER= and SLAVE= directives to their configuration files. The configuration files for each of the channel-bonded interfaces can be nearly identical. For example, if two Ethernet interfaces are being channel bonded, both eth0 and eth1 may look like the following example.
Interface eth0 configuration
# cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
USERCTL=no
TYPE=Ethernet

Interface eth1 configuration
# cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
TYPE=Ethernet
USERCTL=no
After configuring the interfaces, bring up the bond with the following command:
# ifconfig bond0 up
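Alternatively, on releases that still use the network service, the ifup script or a full network restart brings the bond up and re-reads the BONDING_OPTS line; either of the following should work:

# ifup bond0
# service network restart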
If the bonding is configured correctly, we can view the configuration with the ifconfig command:
# ifconfig
bond0 Link encap:Ethernet HWaddr 00:0C:29:69:31:C4
inet addr:172.16.1.207 Bcast:172.16.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe69:31c4/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:19676 errors:0 dropped:0 overruns:0 frame:0
TX packets:342 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1623240 (1.5 MiB) TX bytes:42250 (41.2 KiB)

eth0 Link encap:Ethernet HWaddr 00:0C:29:69:31:C4
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:10057 errors:0 dropped:0 overruns:0 frame:0
TX packets:171 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:832257 (812.7 KiB) TX bytes:22751 (22.2 KiB)
Interrupt:19 Base address:0x2000

eth1 Link encap:Ethernet HWaddr 00:0C:29:69:31:C4
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:9620 errors:0 dropped:0 overruns:0 frame:0
TX packets:173 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:791043 (772.5 KiB) TX bytes:20207 (19.7 KiB)
Interrupt:19 Base address:0x2080

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:104 (104.0 b) TX bytes:104 (104.0 b)
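On newer releases where ifconfig is deprecated in favour of iproute2, roughly the same information is available through ip (shown only as an alternative, not part of the output above):

# ip addr show bond0
# ip link show master bond0

The second command lists the interfaces currently enslaved to bond0.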
To view all existing bonds, we can run the following command; it will list bond0:
# cat /sys/class/net/bonding_masters
bond0
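The same sysfs tree also lists the slaves of each bond; for the bond built above:

# cat /sys/class/net/bond0/bonding/slaves
eth0 eth1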
To view the current bonding mode, we can use the following command:
# cat /sys/class/net/bond0/bonding/mode
balance-rr 0
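The mode can also be changed through this sysfs file, but only while bond0 is down (and, on some kernels, only after its slaves have been released); a hedged sketch:

# ifconfig bond0 down
# echo active-backup > /sys/class/net/bond0/bonding/mode
# cat /sys/class/net/bond0/bonding/mode
active-backup 1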
To verify the bonding, we can use the following command; it lists the bonding details:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 1000
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:69:31:c4

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:69:31:ce
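A simple way to exercise the redundancy is to take one slave down and watch how the driver reacts (interface names as in this tutorial; the exact output will differ):

# ifconfig eth0 down
# grep -A 2 "Slave Interface" /proc/net/bonding/bond0

The MII Status reported for eth0 should change to down while traffic continues over eth1; bring the slave back with ifconfig eth0 up.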
Bonding modes
Several policies are available for bonding; the mode is selected with the mode= directive in BONDING_OPTS. The available modes are:
balance-rr or 0 — Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.
active-backup or 1 — Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
balance-xor or 2 — Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request’s MAC address with the MAC address for one of the slave NICs. Once this link is established, transmissions are sent out sequentially beginning with the first available interface.
broadcast or 3 — Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces.
802.3ad or 4 — Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant.
balance-tlb or 5 — Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.
balance-alb or 6 — Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPV4 traffic. Receive load balancing is achieved through ARP negotiation.
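To apply any of these policies, set the chosen mode (by number or name) in the BONDING_OPTS line of ifcfg-bond0; for example, for 802.3ad (the lacp_rate option is shown only as an illustration, and this mode requires an LACP-capable switch):

BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"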