On Wed, Apr 19, 2023 at 6:37 AM Lew Wolfgang wrote:
On 4/18/23 21:03, Andrei Borzenkov wrote:
On 19.04.2023 02:14, Lew Wolfgang wrote:
Hi Folks,
We've been collecting data through four 1-GbE Ethernet interfaces
configured as independent connections on different subnets. This
has been working well enough for years.
But now someone has suggested bonding those four interfaces into
one virtual interface. I've never had any experience with this sort
of thing, it looks like there's "bonding" and a newer thing called
"teaming". We're already getting 950-Mb/channel so we'd be looking
more for load-leveling and fall-over reliability.
I wonder if any of you might have some suggestions or experience
you might share?
We have been using bonding in a system that provides disk storage to a
cluster of computers. It is a very data-intensive analysis system. The
clients are other openSUSE systems, as well as Windows systems. It has
been working great.
After setting it up (in YaST), some devices will be identified as
MASTERS and some as SLAVES, and a new interface, the bonded
device, is created. My setup looks like this:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth2: mtu 1500 qdisc mq
master bond1 state UP group default qlen 1000
link/ether ac:1f:6b:a4:e5:88 brd ff:ff:ff:ff:ff:ff
altname enp101s0f0
3: eth0: mtu 1500 qdisc mq state
DOWN group default qlen 1000
link/ether ac:1f:6b:db:c4:66 brd ff:ff:ff:ff:ff:ff
altname eno1
altname enp4s0
4: eth3: mtu 1500 qdisc mq
master bond1 state UP group default qlen 1000
link/ether ac:1f:6b:a4:e5:88 brd ff:ff:ff:ff:ff:ff
altname enp101s0f1
5: eth4: mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether ac:1f:6b:a4:e5:8a brd ff:ff:ff:ff:ff:ff
altname enp101s0f2
inet 172.22.1.1/16 brd 172.22.255.255 scope global eth4
valid_lft forever preferred_lft forever
6: eth1: mtu 1500 qdisc mq state
DOWN group default qlen 1000
link/ether ac:1f:6b:db:c4:67 brd ff:ff:ff:ff:ff:ff
altname eno2
altname enp5s0
7: eth5: mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether ac:1f:6b:a4:e5:8b brd ff:ff:ff:ff:ff:ff
altname enp101s0f3
8: eth6: mtu 1500 qdisc mq state UP
group default qlen 1000
link/ether ac:1f:6b:f5:dd:3c brd ff:ff:ff:ff:ff:ff
altname enp102s0f0
altname ens7f0
inet 10.2.184.3/26 brd 10.2.184.63 scope global eth6
valid_lft forever preferred_lft forever
9: eth7: mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether ac:1f:6b:f5:dd:3d brd ff:ff:ff:ff:ff:ff
altname enp102s0f1
altname ens7f1
10: eth8: mtu 1500 qdisc mq
master bond0 state UP group default qlen 1000
link/ether ac:1f:6b:f5:dd:3e brd ff:ff:ff:ff:ff:ff
altname enp102s0f2
altname ens7f2
11: eth9: mtu 1500 qdisc mq
master bond0 state UP group default qlen 1000
link/ether ac:1f:6b:f5:dd:3e brd ff:ff:ff:ff:ff:ff
altname enp102s0f3
altname ens7f3
12: bond0: mtu 1500 qdisc
noqueue state UP group default qlen 1000
link/ether ac:1f:6b:f5:dd:3e brd ff:ff:ff:ff:ff:ff
inet 10.2.184.2/26 brd 10.2.184.63 scope global bond0
valid_lft forever preferred_lft forever
13: bond1: mtu 1500 qdisc
noqueue state UP group default qlen 1000
link/ether ac:1f:6b:a4:e5:88 brd ff:ff:ff:ff:ff:ff
inet 10.2.184.50/26 brd 10.2.184.63 scope global bond1
valid_lft forever preferred_lft forever
Note that the SLAVE devices do not have their own IP address.
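For reference, YaST writes this out as ifcfg files under
/etc/sysconfig/network. A minimal sketch of what bond0 and one of its
slaves might look like (the bonding mode and the slave names here are
assumptions for illustration, not copied from my machine; the address is
the one bond0 carries above):

  # /etc/sysconfig/network/ifcfg-bond0  (hypothetical example)
  STARTMODE='auto'
  BOOTPROTO='static'
  IPADDR='10.2.184.2/26'
  BONDING_MASTER='yes'
  BONDING_MODULE_OPTS='mode=active-backup miimon=100'
  BONDING_SLAVE_0='eth8'
  BONDING_SLAVE_1='eth9'

  # /etc/sysconfig/network/ifcfg-eth8  (and likewise for eth9)
  STARTMODE='hotplug'
  BOOTPROTO='none'

The slave files get no IPADDR, which is why they show up without
addresses in the output above.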
As to performance, everything seems snappy. We have not exactly
benchmarked it, but the speed is what we would expect from the
devices involved.
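If you want a rough number rather than a feeling, iperf3 between a
client and the bonded host gives a quick read; several parallel streams
also show whether traffic spreads across the slaves (the hostname below
is a placeholder):

  # on the bonded host
  iperf3 -s
  # on a client: four parallel TCP streams for 30 seconds
  iperf3 -c storage-host -P 4 -t 30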
Be sure to verify that your switch works with bonding. I think that
these days most do, but do check.
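On the host side you can confirm the bonding mode and that all slaves
are up by reading the kernel's status file for the bond, for example:

  cat /proc/net/bonding/bond0

It lists the mode, the MII status, and each slave's state, which is a
quick way to spot a misbehaving link or switch port.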
--
Roger Oberholtzer