Network interface bonding is called by many names: Port Trunking, Channel Bonding, Link Aggregation, NIC teaming, and others. It combines or aggregates multiple network connections into a single channel bonding interface. This allows two or more network interfaces to act as one, to increase throughput and to provide redundancy or failover.

 

The Linux kernel comes with the bonding driver for aggregating multiple physical network interfaces into a single logical interface (for example, aggregating eth0 and eth1 into bond0). For each bonded interface you can define the mode and the link monitoring options. There are seven different mode options, each providing specific load balancing and fault tolerance characteristics.
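
For example, on RHEL-style systems (including Oracle Linux) a bond can be described with interface configuration files. The following is only a minimal sketch; the device names, addresses, and BONDING_OPTS values are illustrative and your distribution or network manager may use a different mechanism:

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative values)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BOOTPROTO=none
    IPADDR=192.168.1.10
    PREFIX=24
    ONBOOT=yes
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (a matching file is needed for eth1)
    DEVICE=eth0
    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes

Once the bond is up, cat /proc/net/bonding/bond0 reports the active mode, the link monitoring settings, and the state of each slave.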

 

Network Bonding Modes

 

The following bonding policy modes are available:

Round-robin: This is the default mode. Packets are transmitted in sequential order, beginning with the first available slave interface. This mode provides load balancing and fault tolerance.

Active backup: Only one slave in the bond is active. Another slave interface becomes active if the active slave interface fails. The bond’s MAC address is externally visible on only one network adapter to avoid confusing a network switch. This mode provides fault tolerance.

XOR (exclusive-or): Network transmissions are based on a transmit hash policy. The default policy derives the hash from MAC addresses. In this mode, network transmissions destined for a specific peer are always sent over the same slave interface. This mode works best for traffic to peers on the same link or local network. This mode provides load balancing and fault tolerance.

Broadcast: All network transmissions are sent on all slave interfaces. This mode provides fault tolerance.

802.3ad: Uses an IEEE 802.3ad dynamic link aggregation policy. Aggregation groups share the same speed and duplex settings. This mode transmits and receives network traffic on all slaves in the active aggregator. This mode requires an 802.3ad-compliant network switch.

Adaptive transmit load balancing (TLB): Outgoing network traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. This mode does not require any special switch support.

Adaptive load balancing (ALB): This mode includes transmit load balancing (tlb) and receive load balancing (rlb) for IPv4 traffic and does not require any special switch support. Receive load balancing is achieved by ARP negotiation.

See the /usr/share/doc/iputils-*/README.bonding file for complete descriptions of the available bonding policy modes. The table below gives a summary and comparison of the network bonding modes; a short example of selecting a mode follows the table.

 

Mode | Policy | How it works | Fault tolerance | Load balancing
0 | Round Robin | Packets are transmitted sequentially through each slave interface, one after the other. | Yes | Yes
1 | Active Backup | Only one NIC is active; if the active NIC goes down, another slave NIC becomes active. | Yes | No
2 | XOR (exclusive OR) | A transmit hash (by default based on MAC addresses) maps each destination to a slave, so the same NIC is always used to transmit to and receive from a given peer. | Yes | Yes
3 | Broadcast | All transmissions are sent on all slave interfaces. | Yes | No
4 | Dynamic Link Aggregation | The aggregated NICs act as one NIC, giving higher throughput while still providing failover if a NIC fails. Requires a switch that supports IEEE 802.3ad. | Yes | Yes
5 | Transmit Load Balancing (TLB) | Outgoing traffic is distributed according to the current load on each slave interface; incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over its MAC address. | Yes | Yes
6 | Adaptive Load Balancing (ALB) | Adds receive load balancing, achieved through ARP negotiation, to TLB; unlike Dynamic Link Aggregation, it requires no special switch configuration. | Yes | Yes
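
Whichever mode is chosen, the bonding driver accepts it either as a number or as its kernel name (balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb, balance-alb). As a sketch, reusing the illustrative BONDING_OPTS style from above:

    BONDING_OPTS="mode=1 miimon=100"                        # numeric form: mode 1
    BONDING_OPTS="mode=active-backup miimon=100"            # equivalent named form
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"   # mode 4; needs an 802.3ad-capable switch

For modes 2 and 4, the transmit hash can also be tuned with the xmit_hash_policy option (for example layer2, the default, or layer3+4).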

 

Network Bonding Link Monitoring

The bonding driver supports two methods to monitor a slave’s link state:

 

MII (Media Independent Interface) Monitor

This is the default link monitoring option. This method monitors only the carrier state of the local network interface. It relies on the device driver for carrier state information, queries the MII registers directly, or uses ethtool to obtain the carrier state. You can specify the following information for MII monitoring (an example follows the list):

  • Monitoring frequency: The interval in milliseconds between carrier state queries
  • Link up delay: The time in milliseconds to wait before using a link that is up
  • Link down delay: The time in milliseconds to wait before switching to another link when the active link is reported as down
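
These three MII settings correspond to the bonding driver options miimon, updelay, and downdelay; the values in the following sketch are only examples (the driver rounds updelay and downdelay to multiples of miimon):

    BONDING_OPTS="mode=active-backup miimon=100 updelay=200 downdelay=200"
    # miimon=100     query the carrier state every 100 ms
    # updelay=200    wait 200 ms after a link comes up before using it
    # downdelay=200  wait 200 ms after a link is reported down before failing over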

 

ARP Monitor

This method of link monitoring sends ARP queries to peer systems on the network and uses the response as an indication that the link is up. The ARP monitor relies on the device driver to keep the last receive time and the last transmit start time updated. If the device driver does not update these times, the ARP monitor fails any slaves that use that device driver. You can specify the following information for ARP monitoring (an example follows the list):

  • Monitoring frequency: The interval in milliseconds at which ARP queries are sent
  • ARP targets: A comma-separated list of IP addresses that ARP queries are sent to
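
These two ARP settings correspond to the bonding driver options arp_interval and arp_ip_target; the addresses in the following sketch are only examples (up to 16 targets may be listed, and ARP monitoring is normally used instead of MII monitoring, not together with it):

    BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=192.168.1.1,192.168.1.254"
    # arp_interval=1000   send ARP probes every 1000 ms
    # arp_ip_target=...   comma-separated list of peer IP addresses to probe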

 
