Sunday, August 17, 2025

LACP on FreeBSD

LACP stands for Link Aggregation Control Protocol. It’s a network protocol used to combine multiple physical network links into a single logical link to increase bandwidth and provide redundancy. It’s part of the IEEE 802.3ad standard (now 802.1AX). 

Here’s a breakdown of what it does and why it’s useful:

  • Increases Bandwidth
    • By bundling multiple links (like two or more Ethernet cables) between switches or between a switch and a server, the total throughput can be higher than a single link.
  • Provides Redundancy
    •  If one physical link fails, traffic is automatically rerouted over the remaining links, so the connection stays up.
  • Dynamic Configuration
    • LACP allows devices to automatically detect and configure link aggregation groups, making it easier to manage than static link aggregation.
  • Load Balancing
    • Traffic can be distributed across the aggregated links based on rules like source/destination IP, MAC addresses, or TCP/UDP ports. 

Let's configure and test it in my homelab.

Homelab

In my homelab, I have a Force10 S60 L3 switch which supports LACP. I will aggregate ports GigabitEthernet 0/28 and GigabitEthernet 0/38 into LAG (Link Aggregation Group) Port-channel 1. I use VLAN 4 for datacenter management with IP subnet 192.168.4.0/24, and the Force10 switch acts as the default gateway with IP address 192.168.4.254. For the FreeBSD 14.3 server, I will use IP address 192.168.4.124.

Below is the connectivity schema between the FreeBSD Server and the Force10 Switch in my homelab.

FreeBSD Server and Force10 Switch Connectivity Schema

Switch configuration 

I will demonstrate the configuration only on GigabitEthernet 0/38, but it must be done on all physical interfaces participating in the port-channel (GigabitEthernet 0/28 in my case). First of all, all physical interfaces participating in the port-channel must have a clean (empty) configuration as documented below ...

 f10-s60(conf-if-gi-0/38)#show conf  
 !  
 interface GigabitEthernet 0/38  
  description BHV01-nic1  
  no ip address  
  no shutdown  
 f10-s60(conf-if-gi-0/38)#  

When the physical interface has an empty configuration, we can assign that physical interface (gi 0/38) to port-channel 1. The desired configuration is documented below ...

 f10-s60(conf-if-gi-0/38)#show conf  
 !  
 interface GigabitEthernet 0/38  
  description BHV01-nic1  
  no ip address  
 !   
  port-channel-protocol LACP   
  port-channel 1 mode active   
  no shutdown  
 f10-s60(conf-if-gi-0/38)#  
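
For reference, this is roughly the command sequence that produces the configuration above (a sketch; the exact FTOS prompts and sub-mode names may differ slightly, and the same steps must be repeated for gi 0/28):

 f10-s60#conf
 f10-s60(conf)#interface gigabitethernet 0/38
 f10-s60(conf-if-gi-0/38)#port-channel-protocol LACP
 f10-s60(conf-if-gi-0/38-lacp)#port-channel 1 mode active
 f10-s60(conf-if-gi-0/38-lacp)#exit
 f10-s60(conf-if-gi-0/38)#no shutdown
 f10-s60(conf-if-gi-0/38)#end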

Now we should see port-channel 1.

 f10-s60#sh interfaces port-channel brief  
 Codes: L - LACP Port-channel  
   LAG Mode Status    Uptime   Ports       
 L  1  L3  down     00:00:00    
 f10-s60#  

 f10-s60#conf  
 f10-s60(conf)#int port-channel 1  
 f10-s60(conf-if-po-1)#show conf  
 !  
 interface Port-channel 1  
  no ip address  
  no shutdown  
 f10-s60(conf-if-po-1)#  

We have port-channel 1, but it is in the down state and without detailed configuration. Let's set MTU to 9216, enable switchport mode, enable portmode hybrid (to allow one native and multiple tagged VLANs), configure spanning-tree, and enable the virtual port-channel interface. We should end up with the configuration documented below ...

f10-s60(conf-if-po-1)#show conf   
  !   
  interface Port-channel 1   
  no ip address   
  mtu 9216   
  switchport 
  portmode hybrid
  spanning-tree rstp edge-port    
  no shutdown   
  f10-s60(conf-if-po-1)#
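
The corresponding commands are roughly as follows (a sketch; note that on some FTOS versions portmode hybrid must be configured before switchport, otherwise the switch complains that the port is already in Layer 2 mode):

 f10-s60(conf)#interface port-channel 1
 f10-s60(conf-if-po-1)#mtu 9216
 f10-s60(conf-if-po-1)#portmode hybrid
 f10-s60(conf-if-po-1)#switchport
 f10-s60(conf-if-po-1)#spanning-tree rstp edge-port
 f10-s60(conf-if-po-1)#no shutdown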

Do not forget that VLAN(s) must be allowed on the particular port-channel. Let's configure VLAN 4 as the native (untagged) VLAN, as it is used for datacenter management ...

 f10-s60(conf-if-vl-4)#show conf  
 !  
 interface Vlan 4  
  description DC-MGMT  
  ip address 192.168.4.254/24  
  untagged GigabitEthernet 0/2-4,6,10,16-19,23,25-27,35-37,43  
  untagged Port-channel 1  
  no shutdown  
 f10-s60(conf-if-vl-4)#  

If you need other VLANs (iSCSI, NFS, various VM traffic, etc.), you can add them later as tagged VLANs leveraging 802.1Q tagging.
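
As a sketch, adding a hypothetical VLAN 5 (e.g. for NFS) as a tagged VLAN on the port-channel would look roughly like this:

 f10-s60(conf)#interface vlan 5
 f10-s60(conf-if-vl-5)#description NFS
 f10-s60(conf-if-vl-5)#tagged Port-channel 1
 f10-s60(conf-if-vl-5)#no shutdown

On the FreeBSD side, a corresponding VLAN interface on top of lagg0 would then be needed (for example vlans_lagg0="5" together with an ifconfig_lagg0_5 line in /etc/rc.conf).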

FreeBSD LACP Configuration in /etc/rc.conf

We will use the lagg driver to do port aggregation (aka teaming or bonding). Use man lagg for the driver documentation.

The lagg driver currently supports the following aggregation protocols:

  • failover (the default)
  • lacp
  • loadbalance
  • roundrobin
  • broadcast
  • none 

The only protocols practically useful for a server virtualization host (bhyve) are failover (active/standby) and lacp (hash-based, load-balanced traffic). As we configure LACP aggregation on the switch ports, we will use lacp with lagghash l2,l3,l4.
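
For comparison, a failover (active/standby) configuration would look roughly like this in /etc/rc.conf (a sketch, not used in my lab; with laggproto failover, traffic flows only over the master port — the first laggport — and fails over to the other port if the master goes down, and no LACP configuration is needed on the switch):

ifconfig_bge2="up"
ifconfig_bge3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport bge2 laggport bge3"
ifconfig_lagg0_alias0="inet 192.168.4.124/24"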

This is the specific LACP configuration in my /etc/rc.conf:

ifconfig_bge2="mtu 9000 up"
ifconfig_bge3="mtu 9000 up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport bge2 laggport bge3 lagghash l2,l3,l4 lacp_fast_timeout"
ifconfig_lagg0_alias0="inet 192.168.4.124/24" 
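
To activate this configuration without rebooting, restarting the network should be sufficient (an assumption on my side; this restarts all interfaces and briefly interrupts connectivity, and the routing restart re-applies the default gateway in case it gets flushed):

# Re-create and re-configure all interfaces from /etc/rc.conf
service netif restart
# Re-apply static routes and the default gateway
service routing restart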

LAG interface - We use lagg0 as a cloned virtual interface aggregating bge2 and bge3.

LAG hash algorithm - We use lagghash l2,l3,l4. Various hash algorithms used for traffic load balancing are covered in greater detail later in this blog post.

LACP fast/slow timers - We use the fast (1-second) LACPDU timeout, which improves network availability in the event of a link failure. As we will see later, with this setting, network traffic fails over within approximately 3 seconds, because fail-over is triggered after the loss of 3 LACP Data Units.

Jumbo Frames - On virtualization and storage servers, we usually want to use Jumbo Frames; therefore, the physical interfaces (bge2, bge3) are configured for Jumbo Frames (MTU 9000), and on the other side (the Force10 Switch) Jumbo Frames must be enabled as well (MTU 9216). 

The topic of Jumbo Frames is covered in greater detail later in this blog post.

FreeBSD LACP Interactive Configuration

The LACP configuration above can also be done interactively from the FreeBSD shell. This interactive configuration can be helpful during troubleshooting.

# Create the lagg interface
ifconfig lagg0 create

# Add member interfaces and set the LACP protocol
ifconfig lagg0 laggproto lacp laggport bge2 laggport bge3 lagghash l2,l3,l4 lacp_fast_timeout

# Assign an IP address to the lagg0 interface
ifconfig lagg0 inet 192.168.4.124/24 
 

Check LACP status

LACP has a few properties which should be validated, understood, and properly configured.

  • LACP timers (fast, slow)
  • Load balancing hash algorithms

Let's check LACP Port-channel status on Force10 switch.

 f10-s60#sh interfaces po1 brief        
 Codes: L - LACP Port-channel  
    LAG Mode Status  Uptime    Ports       
 L  1   L2L3 up      00:18:41  Gi 0/28  (Up)  
                               Gi 0/38  (Up)  
 f10-s60#sh interfaces po1 descr      
 Interface          OK  Status  Protocol  Description  
 Port-channel 1     YES up      up        BHYVE01  
 f10-s60#  

We can see that both ports (Gi 0/28, Gi 0/38) participate in port-channel 1 and are up and running. 

A port-channel in L2L3 mode is a versatile interface that can perform two distinct networking functions simultaneously. 

  • Layer 2 (L2) VLAN Trunking: The port-channel can be configured to carry traffic for one or more VLANs. You would typically use the switchport mode trunk and switchport trunk allowed vlan commands to specify which VLANs are permitted to traverse the aggregated link. This is the L2 part of the configuration.
  • Layer 3 (L3) IP Routing: The port-channel can also have an IP address directly assigned to it. This allows the switch to act as a router for that link, enabling the port-channel to be used as a routed uplink or a termination point for a specific subnet. This is the L3 part. 

LAG 1 (Port-channel 1) is up with both physical ports (Gi 0/28, Gi 0/38) up and running. Port-channel 1 state is OK. That's perfect, so let's focus on LACP timers, LACP hash algorithms, and Jumbo Frames (MTU 9000/9216).

LACP with "fast" or "slow" timers? 

We can configure LACP with "fast" or "slow" timers, which dictate how frequently LACPDU (LACP Data Unit) packets are sent between the LACP-enabled devices: with fast timers an LACPDU is sent every 1 second, with slow timers every 30 seconds.

LACP Timers on Force10 Switch 

It seems that the LACP timer is not configurable on my Force10 S60, or at least I did not find a way to configure it. But I can check the LACP timers there with the following command ...

 f10-s60#sh lacp 1  
 Port-channel 1 admin up, oper up, mode lacp  
 Actor  System ID: Priority 32768, Address 0001.e896.0203  
 Partner System ID: Priority 32768, Address 90b1.1c12.e751  
 Actor Admin Key 1, Oper Key 1, Partner Oper Key 203  
 LACP LAG 1 is an aggregatable link  
 A - Active LACP, B - Passive LACP, C - Short Timeout, D - Long Timeout  
 E - Aggregatable Link, F - Individual Link, G - IN_SYNC, H - OUT_OF_SYNC  
 I - Collection enabled, J - Collection disabled, K - Distribution enabled  
 L - Distribution disabled, M - Partner Defaulted, N - Partner Non-defaulted,  
 O - Receiver is in expired state, P - Receiver is not in expired state  
 Port Gi 0/38 is enabled, LACP is enabled and mode is lacp  
  Actor    Admin: State ACEHJLMP Key 1 Priority 32768  
           Oper:  State ACEGIKNP Key 1 Priority 32768  
  Partner  Admin: State BDFHJLMP Key 0 Priority 0  
           Oper:  State ADEGIKNP Key 203 Priority 32768  
 f10-s60#  

Actor (Force10 S60) is configured and operates with Short Timeout (aka Fast LACP timers).

Partner (FreeBSD Server) is configured and operates with Long Timeout (aka Slow LACP timers).

Both sides should use the same timers, and Fast LACPDU (Short Timeout) is recommended.

LACP Timers on FreeBSD Server 

FreeBSD can be configured to support Fast LACPDU (Short Timeout) with the following ifconfig options:

  • lacp_fast_timeout
    • Enable LACP fast-timeout on the interface. 
  • -lacp_fast_timeout
    • Disable LACP fast-timeout on the interface.

That's why we have this in /etc/rc.conf:

ifconfig_lagg0="laggproto lacp laggport bge2 laggport bge3 lagghash l2,l3,l4 lacp_fast_timeout

When lacp_fast_timeout is set, the Force10 Switch reports the partner (FreeBSD Server) as C (Short Timeout). 

 f10-s60#sh lacp 1  
 Port-channel 1 admin up, oper up, mode lacp  
 Actor  System ID: Priority 32768, Address 0001.e896.0203  
 Partner System ID: Priority 32768, Address 90b1.1c12.e751  
 Actor Admin Key 1, Oper Key 1, Partner Oper Key 203  
 LACP LAG 1 is an aggregatable link  
 A - Active LACP, B - Passive LACP, C - Short Timeout, D - Long Timeout  
 E - Aggregatable Link, F - Individual Link, G - IN_SYNC, H - OUT_OF_SYNC  
 I - Collection enabled, J - Collection disabled, K - Distribution enabled  
 L - Distribution disabled, M - Partner Defaulted, N - Partner Non-defaulted,  
 O - Receiver is in expired state, P - Receiver is not in expired state  
 Port Gi 0/28 is enabled, LACP is enabled and mode is lacp  
  Actor   Admin: State ACEHJLMP Key 1 Priority 32768  
          Oper:  State ACEGIKNP Key 1 Priority 32768  
  Partner Admin: State BDFHJLMP Key 0 Priority 0  
          Oper:  State ACEGIKNP Key 203 Priority 32768  
 Port Gi 0/38 is enabled, LACP is enabled and mode is lacp  
  Actor   Admin: State ACEHJLMP Key 1 Priority 32768  
          Oper:  State ACEGIKNP Key 1 Priority 32768  
  Partner Admin: State BDFHJLMP Key 0 Priority 0  
          Oper:  State ACEGIKNP Key 203 Priority 32768  
 f10-s60#  

I do not know

  • why the Partner Admin State is D (Long Timeout) while the Oper State is C (Short Timeout)
  • how to show the LACP timeout on the FreeBSD server side

but based on the Force10 Switch status, it seems that both sides operate with Fast LACPDU (Short Timeout).

LACP Lag Hash Algorithms

LACP Lag Hash Algorithm on FreeBSD Server 

On the FreeBSD Server, the hash algorithm is configured on the lagg0 interface, not on the physical member ports. A particular hash algorithm can be configured by adding the lagghash option to the ifconfig_lagg0 line in /etc/rc.conf.

ifconfig_lagg0="laggproto lacp laggport bge2 laggport bge3 lagghash l2,l3,l4"

We have the following options:

  • l2: Layer 2 hashing (based on MAC addresses)
  • l3: Layer 3 hashing (based on IP addresses)
  • l4: Layer 4 hashing (based on TCP/UDP ports)
  • l2,l3: Combines Layer 2 and Layer 3 hashing
  • l2,l3,l4: Combines Layer 2, Layer 3, and Layer 4 hashing. 
    • This is the most common and effective algorithm as it provides the most granular distribution
    • l2,l3,l4 is the default option when lagghash option is omitted
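
The hash can also be changed at runtime, which is handy when experimenting with load distribution (a sketch):

# Hash only on L3 and L4 headers (IP addresses and TCP/UDP ports)
ifconfig lagg0 lagghash l3,l4
# Verify which hash algorithm is currently active
ifconfig lagg0 | grep lagghash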

Below is the output from FreeBSD Server in my lab ... 

 root@bhyve01:~ # ifconfig lagg0  
 lagg0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 9000  
      options=c019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO,LINKSTATE>  
      ether 90:b1:1c:12:e7:51  
      hwaddr 00:00:00:00:00:00  
      inet 192.168.4.124 netmask 0xffffff00 broadcast 192.168.4.255  
      laggproto lacp lagghash l2,l3,l4  
      laggport: bge2 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>  
      laggport: bge3 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>  
      groups: lagg  
      media: Ethernet autoselect  
      status: active  
      nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>  
 root@bhyve01:~ #   

flowid 

Another factor in the load-balancing hash algorithm is the ifconfig option use_flowid. When it is enabled, the loadbalance and lacp modes will use the RSS hash from the network card, if available, to avoid computing one in software. In FreeBSD's ifconfig, the use_flowid option controls whether a network interface uses the hardware-generated RSS (Receive Side Scaling) hash to select the egress port for a packet. This setting is obviously most relevant for Link Aggregation Groups (LAGGs) and other similar interfaces that bundle multiple physical links together. 
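
The flowid behavior can be checked and changed with sysctl and ifconfig (a sketch; to my understanding, net.link.lagg.default_use_flowid is only the default for newly created lagg interfaces):

# Show the system-wide default for newly created lagg interfaces
sysctl net.link.lagg.default_use_flowid
# Disable use of the NIC-provided RSS hash on lagg0 (fall back to software hashing)
ifconfig lagg0 -use_flowid
# Re-enable use of the NIC-provided RSS hash
ifconfig lagg0 use_flowid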

When a packet arrives at the NIC, the hardware performs a fast, stateless hash calculation based on a predefined set of fields in the packet header. Common fields used for this hash include:

  • Source IP address
  • Destination IP address
  • Source port
  • Destination port
  • Protocol type (e.g., TCP, UDP)

This is often called a 5-tuple hash. The result of this hash is a unique value for a given connection or "flow." This hash value is then used as an index into a lookup table, which maps the hash to a specific receive queue. Each receive queue is typically assigned to a different CPU core. This ensures that all packets belonging to the same connection (e.g., a single TCP stream) are consistently processed by the same CPU core, which is crucial for maintaining packet order and data integrity.

The concept of a hardware-generated RSS hash applies to each network interface individually. If a system has two or more interfaces, each one uses its own hardware to perform the RSS hash. Each NIC independently calculates the RSS hash for the traffic it receives: the hash from bge2 is used to select a CPU core for its incoming packets, and the hash from bge3 is used to select a CPU core for its incoming packets. There is no direct coordination of the hash values between the two separate NICs.

In terms of load balancing, the goal is to distribute the total network load across all available CPU cores. With two or more interfaces, the system's combined network throughput can be much higher than a single core could handle. By using RSS, the system can utilize the processing power of multiple cores to handle the high volume of traffic from multiple NICs. For instance, a system with two 10 Gigabit Ethernet adapters can potentially handle a combined 20 Gbps of traffic, which would be impossible for a single CPU core to process efficiently without RSS.

To check if a network interface supports RSS (Receive Side Scaling) in FreeBSD, you can use the sysctl command to query the kernel.

 root@bhyve01:~ # sysctl -a | grep bge | grep rss << No RSS for bge interfaces 
 root@bhyve01:~ # sysctl -a | grep rss  
 hw.bxe.udp_rss: 0  
 hw.ix.enable_rss: 1  
 root@bhyve01:~ #   

The output above shows that my bge interfaces do not support RSS, which makes sense since RSS is generally relevant only for 10 Gb and faster interfaces. Since my bge interfaces are 1 Gb, they rely on software hashing instead; RSS is not used, so a single-threaded network stack is used and only one CPU core handles all network traffic. This is generally sufficient to achieve up to 4 Gbps of aggregate throughput across the four 1 Gb interfaces I have in my homelab servers. Please note that my homelab runs on older hardware optimized for low power consumption. Once I have the opportunity to test FreeBSD servers with 25 Gb or even 100 Gb interfaces, I plan to publish a dedicated blog post about RSS and other techniques for handling high network throughput. 

LACP Lag Hash Algorithm on Force10 Switch 

I did not find a way to configure or view the LAG hash algorithm on the Force10 Switch. However, we can clear the counter statistics and observe the Input/Output statistics on the physical interfaces to understand the load distribution across them.

 f10-s60#clear counters              
 Clear counters on all interfaces [confirm]   
 f10-s60#show interfaces gigabitethernet 0/38   
 GigabitEthernet 0/38 is up, line protocol is up  
 Port is part of Port-channel 1  
 Description: BHV01-nic1  
 Hardware is DellForce10Eth, address is 00:01:e8:96:02:05  
   Current address is 00:01:e8:96:02:05  
 Pluggable media not present  
 Interface index is 43828226  
 Internet address is not set  
 MTU 9216 bytes, IP MTU 9198 bytes  
 LineSpeed 1000 Mbit, Mode full duplex  
 Auto-mdix enabled, Flowcontrol rx off tx off  
 ARP type: ARPA, ARP Timeout 04:00:00  
 Last clearing of "show interface" counters 00:00:03  
 Queueing strategy: fifo  
 Input Statistics:  
    3 packets, 384 bytes  
    0 64-byte pkts, 0 over 64-byte pkts, 3 over 127-byte pkts  
    0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts  
    3 Multicasts, 0 Broadcasts  
    0 runts, 0 giants, 0 throttles  
    0 CRC, 0 overrun, 0 discarded  
 Output Statistics:  
    1 packets, 64 bytes, 0 underruns  
    1 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts  
    0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts  
    0 Multicasts, 1 Broadcasts, 0 Unicasts  
    0 throttles, 0 discarded, 0 collisions  
 Rate info (interval 299 seconds):  
    Input 00.00 Mbits/sec,     0 packets/sec, 0.00% of line-rate  
    Output 00.00 Mbits/sec,     0 packets/sec, 0.00% of line-rate  
 Time since last interface status change: 00:48:38  
 f10-s60#show interfaces gigabitethernet 0/28   
 GigabitEthernet 0/28 is up, line protocol is up  
 Port is part of Port-channel 1  
 Description: BHV01-nic2  
 Hardware is DellForce10Eth, address is 00:01:e8:96:02:05  
   Current address is 00:01:e8:96:02:05  
 Pluggable media not present  
 Interface index is 41206786  
 Internet address is not set  
 MTU 9216 bytes, IP MTU 9198 bytes  
 LineSpeed 1000 Mbit, Mode full duplex  
 Auto-mdix enabled, Flowcontrol rx off tx off  
 ARP type: ARPA, ARP Timeout 04:00:00  
 Last clearing of "show interface" counters 00:02:35  
 Queueing strategy: fifo  
 Input Statistics:  
    155 packets, 20287 bytes  
    0 64-byte pkts, 4 over 64-byte pkts, 151 over 127-byte pkts  
    0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts  
    151 Multicasts, 0 Broadcasts  
    0 runts, 0 giants, 0 throttles  
    0 CRC, 0 overrun, 0 discarded  
 Output Statistics:  
    200 packets, 15283 bytes, 0 underruns  
    185 64-byte pkts, 8 over 64-byte pkts, 7 over 127-byte pkts  
    5 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts  
    81 Multicasts, 114 Broadcasts, 5 Unicasts  
    0 throttles, 0 discarded, 0 collisions  
 Rate info (interval 299 seconds):  
    Input 00.00 Mbits/sec,     0 packets/sec, 0.00% of line-rate  
    Output 00.00 Mbits/sec,     0 packets/sec, 0.00% of line-rate  
 Time since last interface status change: 00:51:07  
 f10-s60#  

Jumbo Frames (MTU 9000/9216)

I would like to use the FreeBSD server as a virtualization and storage server. Jumbo Frames (MTU 9216 on the switch, MTU 9000 on the server) have a performance benefit because less frame segmentation means fewer CPU cycles and higher network throughput. 

On the Force10 switch (Dell Networking), we set MTU 9216 on the port-channel interface and also on the individual physical interfaces. 

 f10-s60#show running-config interface port-channel 1  
 !  
 interface Port-channel 1  
  description BHYVE01  
  no ip address  
  mtu 9216  
  switchport  
  spanning-tree rstp edge-port   
  no shutdown  
 f10-s60#show running-config interface gigabitethernet 0/28  
 !  
 interface GigabitEthernet 0/28  
  description BHV01-nic2  
  no ip address  
  mtu 9216  
 !   
  port-channel-protocol LACP   
  port-channel 1 mode active   
  no shutdown  
 f10-s60#show running-config interface gigabitethernet 0/38  
 !  
 interface GigabitEthernet 0/38  
  description BHV01-nic1  
  no ip address  
  mtu 9216  
 !   
  port-channel-protocol LACP   
  port-channel 1 mode active   
  no shutdown  
 f10-s60#  

On the FreeBSD server, we set the MTU on the physical interfaces (bge2, bge3) that are members of the LACP bond. The lagg0 virtual interface will inherit the MTU from its member ports. It is stored in /etc/rc.conf:

 ifconfig_bge2="mtu 9000 up"  
 ifconfig_bge3="mtu 9000 up"  

Correct configuration of Jumbo Frames (MTU 9000) can be tested with the following command. The payload size of 8972 bytes is the largest that fits into the 9000-byte MTU (8972 bytes of ICMP data + 8 bytes of ICMP header + 20 bytes of IP header = 9000 bytes), and -D sets the Don't Fragment bit ...

 root@bhyve01:~ # ping -s 8972 -D 192.168.4.254  
 PING 192.168.4.254 (192.168.4.254): 8972 data bytes  
 8980 bytes from 192.168.4.254: icmp_seq=0 ttl=255 time=1.605 ms  
 8980 bytes from 192.168.4.254: icmp_seq=1 ttl=255 time=1.385 ms  
 8980 bytes from 192.168.4.254: icmp_seq=2 ttl=255 time=1.329 ms  
 ^C  
 --- 192.168.4.254 ping statistics ---  
 3 packets transmitted, 3 packets received, 0.0% packet loss  
 round-trip min/avg/max/stddev = 1.329/1.440/1.605/0.119 ms  
 root@bhyve01:~ #   

A bigger frame should not be transferred ...

 root@bhyve01:~ # ping -s 8973 -D 192.168.4.254  
 PING 192.168.4.254 (192.168.4.254): 8973 data bytes  
 ping: sendto: Message too long  
 ping: sendto: Message too long  
 ping: sendto: Message too long  
 ^C  
 --- 192.168.4.254 ping statistics ---  
 3 packets transmitted, 0 packets received, 100.0% packet loss  
 root@bhyve01:~ #   

This is the expected behavior, so Jumbo Frames work correctly. 

Test link high availability

Let's test the high availability of the LACP LAG. We can test it by pinging bhyve01.home.uw.cz and ...

  1. Administratively shut down gi 0/28 (see the switch command sketch after this list)
  2. Watch how long the fail-over from 0/28 to 0/38 takes
  3. Put gi 0/28 back up (no shutdown)
  4. Watch whether the traffic fails back (it should stay on 0/38)
  5. Administratively shut down gi 0/38
  6. Watch how long the fail-over from 0/38 to 0/28 takes
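
The shutdown/no shutdown steps on the switch side are roughly these commands (a sketch):

 f10-s60#conf
 f10-s60(conf)#interface gigabitethernet 0/28
 f10-s60(conf-if-gi-0/28)#shutdown
 f10-s60(conf-if-gi-0/28)#no shutdown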

 dpasek@Davids-MacBook-Pro ~ % ping bhyve01.home.uw.cz  
 PING bhyve01.home.uw.cz (192.168.4.124): 56 data bytes  
 64 bytes from 192.168.4.124: icmp_seq=0 ttl=59 time=18.528 ms  
 64 bytes from 192.168.4.124: icmp_seq=1 ttl=59 time=20.421 ms  
 64 bytes from 192.168.4.124: icmp_seq=2 ttl=59 time=23.239 ms  
 ...
 64 bytes from 192.168.4.124: icmp_seq=60 ttl=59 time=21.930 ms  
 64 bytes from 192.168.4.124: icmp_seq=61 ttl=59 time=20.994 ms  
 64 bytes from 192.168.4.124: icmp_seq=62 ttl=59 time=18.439 ms  
 Request timeout for icmp_seq 63  <<< gi0/28 shutdown
 Request timeout for icmp_seq 64  <<< gi0/28 shutdown
 Request timeout for icmp_seq 65  <<< gi0/28 shutdown
 64 bytes from 192.168.4.124: icmp_seq=66 ttl=59 time=19.156 ms <<< gi0/28 no shutdown
 64 bytes from 192.168.4.124: icmp_seq=67 ttl=59 time=27.192 ms  
 64 bytes from 192.168.4.124: icmp_seq=68 ttl=59 time=19.301 ms  
 ...
 64 bytes from 192.168.4.124: icmp_seq=118 ttl=59 time=20.738 ms  
 64 bytes from 192.168.4.124: icmp_seq=119 ttl=59 time=20.308 ms  
 64 bytes from 192.168.4.124: icmp_seq=120 ttl=59 time=20.069 ms  
 Request timeout for icmp_seq 121  <<< gi0/38 shutdown
 Request timeout for icmp_seq 122  <<< gi0/38 shutdown
 Request timeout for icmp_seq 123  <<< gi0/38 shutdown
 64 bytes from 192.168.4.124: icmp_seq=124 ttl=59 time=24.653 ms <<< gi0/38 no shutdown  
 64 bytes from 192.168.4.124: icmp_seq=125 ttl=59 time=21.158 ms  
 64 bytes from 192.168.4.124: icmp_seq=126 ttl=59 time=20.305 ms  
 64 bytes from 192.168.4.124: icmp_seq=127 ttl=59 time=25.554 ms  
 64 bytes from 192.168.4.124: icmp_seq=128 ttl=59 time=19.826 ms  
 64 bytes from 192.168.4.124: icmp_seq=129 ttl=59 time=26.932 ms  
 ^C  
 --- bhyve01.home.uw.cz ping statistics ---  
 130 packets transmitted, 124 packets received, 4.6% packet loss  
 round-trip min/avg/max/stddev = 17.571/20.970/30.353/2.720 ms  
 dpasek@Davids-MacBook-Pro ~ %   

Test results

  • Fail-over from gi0/28 to gi0/38 => 3 seconds (as expected: with lacp_fast_timeout, LACPDUs are exchanged every 1 second and the link is declared dead after 3 missed LACPDUs)
  • Fail-over from gi0/38 to gi0/28 => 3 seconds (same reasoning as above)
  • Does the traffic fail back from gi0/38 to gi0/28 when gi0/28 is returned to the port-channel? => NO (we know this because there was a second outage when gi0/38 was shut down, which means the traffic was still flowing over gi0/38)

Conclusion

LACP has a positive impact on availability (two links in the port-channel), performance (aggregated bandwidth with traffic load balancing based on the configured hash), and manageability (configuration can be done on a single virtual interface Po1 instead of multiple physical interfaces).

To be honest, the availability in my homelab is limited, because both links (bge2, bge3) are connected to a single Force10 Switch; I do not have two switches in my homelab. With two switches, they would have to support some kind of MLAG (Multi-Chassis Link Aggregation), such as Cisco Virtual Port-Channel (vPC), Force10 Virtual Link Trunking (VLT), etc.

However, it is crucial to have both ends (Server <-> Switch) under your control and to fully understand and test how LACP works.
