[opensuse] network bonding slaves
We are setting up a cluster of computers that will be connected via a 10 Gbit network. The switch we are considering is something like https://www.netgear.com/business/products/switches/smart/XS748T.aspx

The sales blurb says: "Up to eight 10-Gigabit Ethernet links can be aggregated into a virtual 80-Gbps connection."

In openSUSE, would this be related to bonding slaves? I have never used this.

The goal would be that one computer (primarily a file server type of thing) has a high-bandwidth stream for reading and writing data to the switch, where numerous other computers read and write via their single cable. This server would have, say, four 10 Gbit Ethernet ports connected to the switch. Ideally these would appear to be a single network.

Am I thinking correctly? Anyone have any advice on this kind of thing? We are targeting an openSUSE Leap system for this server.

-- 
Roger Oberholtzer
Roger Oberholtzer wrote:
We are setting up a cluster of computers that will be connected via a 10 GB network.
The switch we are considering is something like https://www.netgear.com/business/products/switches/smart/XS748T.aspx
The sales blurb says:
Up to eight 10-Gigabit Ethernet links can be aggregated into a virtual 80‑Gbps connection
In openSUSE, would this be related to bonding slaves?
LACP is quite old, I wonder if Netgear is providing the hardware version, i.e. let the switch do the work? We have had network interface bonding in Linux for at least 10-12 years, also with LACP. We are using it, but I don't recall needing anything special in the switches. (I didn't do the setup).
The goal would be that one computer (primarily a file server type of thing) has a high bandwidth stream for reading and writing data to the switch, where numerous other computers read and write via their single cable. This server would have, say, 4 10 GBit Ethernet ports connected to the switch. Ideally these would appear to be a single network.
Yup, sounds about right - that is how our storage servers are connected to the switch. Only 4 x 1Gb though. I would also look at being able to feed such a network - i.e. make sure the IO devices have enough bandwidth, as well as the interconnect between the IO devices and the network device(s).

-- 
Per Jessen, Zürich (7.4°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
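A quick way to check whether the links themselves (rather than the disks or the interconnect) are the limit is a raw throughput test, for example with iperf3 - a sketch, with a placeholder server address:

  # on the file server
  iperf3 -s

  # on one or more clients: four parallel TCP streams for 30 seconds
  iperf3 -c 192.168.10.1 -P 4 -t 30

Comparing that against local disk throughput (e.g. dd with direct I/O) shows which side needs attention first.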
On 27.03.2020 14:11, Per Jessen wrote:
Roger Oberholtzer wrote:
We are setting up a cluster of computers that will be connected via a 10 GB network.
The switch we are considering is something like https://www.netgear.com/business/products/switches/smart/XS748T.aspx
The sales blurb says:
Up to eight 10-Gigabit Ethernet links can be aggregated into a virtual 80‑Gbps connection
This is bandwidth, not speed. Each single flow between two systems is still limited to 10Gbps, but with some luck you may have multiple flows at the same time utilizing multiple physical links.
In openSUSE, would this be related to bonding slaves?
Yes. The new kid on the block is the team driver.
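For completeness, a sketch of the team-driver equivalent, assuming the interfaces are managed by NetworkManager (openSUSE Leap servers typically default to wicked, where the configuration looks different); connection and interface names are placeholders:

  # create a team device whose runner speaks LACP (802.3ad)
  nmcli connection add type team con-name team0 ifname team0 \
        config '{"runner": {"name": "lacp"}}'

  # attach the physical ports to the team
  nmcli connection add type team-slave con-name team0-p1 ifname eth0 master team0
  nmcli connection add type team-slave con-name team0-p2 ifname eth1 master team0

  nmcli connection up team0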
LACP is quite old, I wonder if Netgear is providing the hardware version, i.e. let the switch do the work? We have had network interface bonding in Linux for at least 10-12 years, also with LACP. We are using it, but I don't recall needing anything special in the switches. (I didn't do the setup).
Switches must support LACP, LACP must be configured in advance, and the configuration must match between the two partners, otherwise no connection is possible. If available, LACP is preferable to static aggregation (EtherChannel) because LACP actively probes each physical link and can detect problems beyond simple physical link failure.

Linux also offers other bond modes that do not require special switch support/setup. Depending on the exact requirements they may be sufficient for load distribution across multiple links.
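For reference, a minimal sketch of what an 802.3ad (LACP) bond can look like on openSUSE Leap with wicked/sysconfig; the interface names, address and hash policy below are placeholders to adapt:

  # /etc/sysconfig/network/ifcfg-bond0
  STARTMODE='auto'
  BOOTPROTO='static'
  IPADDR='192.168.10.1/24'
  BONDING_MASTER='yes'
  BONDING_SLAVE_0='eth0'
  BONDING_SLAVE_1='eth1'
  BONDING_SLAVE_2='eth2'
  BONDING_SLAVE_3='eth3'
  # mode=802.3ad is LACP; xmit_hash_policy controls how flows are spread
  # across the slaves (layer3+4 hashes on IP addresses and ports)
  BONDING_MODULE_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer3+4'

  # each slave then gets its own ifcfg-ethX containing only:
  #   STARTMODE='hotplug'
  #   BOOTPROTO='none'

The matching LAG must also be configured on the four switch ports. Each individual client still tops out at 10 Gbit, but several clients can be served in parallel.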
The goal would be that one computer (primarily a file server type of thing) has a high bandwidth stream for reading and writing data to the switch, where numerous other computers read and write via their single cable. This server would have, say, 4 10 GBit Ethernet ports connected to the switch. Ideally these would appear to be a single network.
Yup, sounds about right - that is how our storage servers are connected to the switch. Only 4 x 1Gb though. I would also look at being able to feed such a network - i.e. make sure you have the IO devices with enough bandwidth too as well as sufficient bandwidth on the interconnect between IO devices and network device(s).
On 2020/03/27 04:42, Andrei Borzenkov wrote:
On 27.03.2020 14:11, Per Jessen wrote:
Roger Oberholtzer wrote:
We are setting up a cluster of computers that will be connected via a 10 GB network. The switch we are considering is something like https://www.netgear.com/business/products/switches/smart/XS748T.aspx The sales blurb says: Up to eight 10-Gigabit Ethernet links can be aggregated into a virtual 80‑Gbps connection
This is bandwidth, not speed. Each single flow between two systems is still limited to 10Gbps, but with some luck you may have multiple flows at the same time utilizing multiple physical links.
---- It depends on whether it is done so that overflow rolls over to other channels, but teaming or bonding has a RAID 0 mode that is, BTW, rather CPU intensive.
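(That "RAID 0" mode is the bonding driver's balance-rr round-robin mode. A quick, non-persistent way to try it with iproute2 - interface names are placeholders:)

  # create a round-robin bond at runtime (not persistent across reboots)
  ip link add bond0 type bond mode balance-rr
  ip link set eth0 down && ip link set eth0 master bond0
  ip link set eth1 down && ip link set eth1 master bond0
  ip link set bond0 up
  # balance-rr can reorder packets within a single TCP flow, which is part
  # of why it costs CPU and often underwhelms in practice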
In openSUSE, would this be related to bonding slaves?
Yes. New kid on the block is team driver.
LACP is quite old, I wonder if Netgear is providing the hardware version, i.e. let the switch do the work? We have had network interface bonding in Linux for at least 10-12 years, also with LACP. We are using it, but I don't recall needing anything special in the switches. (I didn't do the setup).
Well, that's the rub.... I set up 2 Intel 10G cards with 2 ports each between two computers -- they did need a direct connection. And if you want to go between more than 2 computers, you need to be able to route the conversation packets that are split between cables through a capable router -- usually more than a home budget wants to see, when I last priced them.
Switches must support LACP and LACP must be configured in advance, and configuration must match between two partners otherwise no connection is possible. If available, LACP is preferable to static aggregation (etherchannel) because LACP actively probes each physical link and can detect problems beyond simple physical link failure.
Linux also offers other bond modes that do not require special switch support/setup. Depending on exact requirements they may be sufficient for load distribution across multiple links.
The goal would be that one computer (primarily a file server type of thing) has a high bandwidth stream for reading and writing data to the switch, where numerous other computers read and write via their single cable. This server would have, say, 4 10 GBit Ethernet ports connected to the switch. Ideally these would appear to be a single network.
Yup, sounds about right - that is how our storage servers are connected to the switch. Only 4 x 1Gb though. I would also look at being able to feed such a network - i.e. make sure you have the IO devices with enough bandwidth too as well as sufficient bandwidth on the interconnect between IO devices and network device(s).
---- That is more like a failover setup, in which you won't get the parallel bandwidth.

I eventually gave up on trying to aggregate 2x10 due to the CPU overhead making it, at best, no faster than a single 1x10. Even with Jumbo Packets at 9k (which all talking computers on the same physical net would need to be capable of running), 10G was close to the limit, with CIFS READ ~600MB/s and write around the low 300s. Well, right now it's 635MB/s (the M is the same base as the 'B', i.e. base 2) read and 240MB write with a Win7x64 client (3.2GHz Xeon CPU) and a Linux server with 3.33GHz CPUs. Of note, that's network speed without file I/O. That slows it a bit, even though locally the disk is about 1-1.2GB/s (max linear r/w using direct I/O).
On Fri, Mar 27, 2020 at 12:11 PM Per Jessen <per@computer.org> wrote:
Roger Oberholtzer wrote:
In openSUSE, would this be related to bonding slaves?
LACP is quite old, I wonder if Netgear is providing the hardware version, i.e. let the switch do the work? We have had network interface bonding in Linux for at least 10-12 years, also with LACP. We are using it, but I don't recall needing anything special in the switches. (I didn't do the setup).
The switch I referenced said it supported IEEE 802.3ad - LAGs (LACP).

I am curious how the switch could manage this. That is, how would the server know to send out data on the various interfaces when they are aggregated later? I would imagine that the switch would assign an address to the aggregated ports. It's how the server knows this and acts accordingly that I'm curious about.

At the end of the day, I would just want to ensure that openSUSE Leap would be able to use this feature without anything strange required.

https://en.wikipedia.org/wiki/Link_aggregation#Linux_bonding_driver seems to imply that the Linux bonding driver is a software version of this. The switch would, it seems, do something else. So it is an either/or thing. Perhaps it is best to let the switch do it. Or?

-- 
Roger Oberholtzer
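With 802.3ad/LACP both ends take part: the Linux bonding driver exchanges LACPDUs with the switch, the two agree on an aggregator, and the server then picks an outgoing slave per flow using its transmit hash (the switch does the same for traffic towards the server). Once a bond is up, the negotiated state can be checked from the server side - assuming the bonding driver (not the team driver) and a bond named bond0:

  # show negotiated LACP state for the bond
  cat /proc/net/bonding/bond0
  # look at the "802.3ad info" section, the aggregator ID and each slave's
  # partner MAC / port state to confirm all links joined the same LAG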
Roger Oberholtzer wrote:
On Fri, Mar 27, 2020 at 12:11 PM Per Jessen <per@computer.org> wrote:
Roger Oberholtzer wrote:
In openSUSE, would this be related to bonding slaves?
LACP is quite old, I wonder if Netgear is providing the hardware version, i.e. let the switch do the work? We have had network interface bonding in Linux for at least 10-12 years, also with LACP. We are using it, but I don't recall needing anything special in the switches. (I didn't do the setup).
The switch I referenced said it supported IEEE 802.3ad - LAGs (LACP).
I am curious how the switch could manage this. That is, how would the server know to send out data on the various interfaces when they are aggregated later? I would imagine that the switch would assign an address to the aggregated ports. It's how the server knows this and acts accordingly that I'm curious about.
Me too.
https://en.wikipedia.org/wiki/Link_aggregation#Linux_bonding_driver seems to imply that the linux bonding driver is a software version of this.
Yes, that is also how I understand it. I guess we're not using LACP in our setup.

-- 
Per Jessen, Zürich (10.0°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
On Fri, 27 Mar 2020 13:38:19 +0100 Per Jessen <per@computer.org> wrote:
Roger Oberholtzer wrote:
On Fri, Mar 27, 2020 at 12:11 PM Per Jessen <per@computer.org> wrote:
Roger Oberholtzer wrote:
In openSUSE, would this be related to bonding slaves?
LACP is quite old, I wonder if Netgear is providing the hardware version, i.e. let the switch do the work? We have had network interface bonding in Linux for at least 10-12 years, also with LACP. We are using it, but I don't recall needing anything special in the switches. (I didn't do the setup).
The switch I referenced said it supported IEEE 802.3ad - LAGs (LACP).
I am curious how the switch could manage this. That is, how would the server know to send out data on the various interfaces when they are aggregated later? I would imagine that the switch would assign an address to the aggregated ports. It's how the server knows this and acts accordingly that I'm curious about.
Me too.
Dunno whether there are any answers, but the most likely looking links I see are:
https://www.kernel.org/doc/Documentation/networking/bonding.txt
https://kb.netgear.com/000051185/What-are-link-aggregation-and-LACP-and-how-...
https://en.wikipedia.org/wiki/Link_aggregation#Linux_bonding_driver seems to imply that the linux bonding driver is a software version of this.
Yes, that is also how I understand it. I guess we're not using LACP in our setup.
On 03/27/2020 08:03 AM, Dave Howorth wrote:
On Fri, 27 Mar 2020 13:38:19 +0100 Per Jessen <per@computer.org> wrote:
Roger Oberholtzer wrote:
On Fri, Mar 27, 2020 at 12:11 PM Per Jessen <per@computer.org> wrote: <snip>
Dunno whether there are any answers, but the most likely looking links I see are: https://www.kernel.org/doc/Documentation/networking/bonding.txt https://kb.netgear.com/000051185/What-are-link-aggregation-and-LACP-and-how-...
https://en.wikipedia.org/wiki/Link_aggregation#Linux_bonding_driver seems to imply that the linux bonding driver is a software version of this.
Yes, that is also how I understand it. I guess we're not using LACP in our setup.
And another link that looked useful:
https://www.tecmint.com/configure-network-bonding-teaming-in-ubuntu/

-- 
David C. Rankin, J.D.,P.E.
participants (6)
- Andrei Borzenkov
- Dave Howorth
- David C. Rankin
- L A Walsh
- Per Jessen
- Roger Oberholtzer