On 2020/03/27 04:42, Andrei Borzenkov wrote:
On 27.03.2020 14:11, Per Jessen wrote:
Roger Oberholtzer wrote:
We are setting up a cluster of computers that will be connected via a 10 Gbit network. The switch we are considering is something like https://www.netgear.com/business/products/switches/smart/XS748T.aspx. The sales blurb says: "Up to eight 10-Gigabit Ethernet links can be aggregated into a virtual 80-Gbps connection."
This is bandwidth, not speed. Each single flow between two systems is still limited to 10Gbps, but with some luck you may have multiple flows at the same time utilizing multiple physical links.
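For what it's worth, which physical link a given flow ends up on is decided by the bond driver's transmit hash policy (for the hash-based modes such as 802.3ad and balance-xor). A rough sketch of the module option, values only as an example:

  BONDING_MODULE_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer3+4'

layer3+4 hashes on IP addresses plus ports, so separate TCP connections can land on different slaves; a single connection still never exceeds one link's 10Gbps.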
---- It depends on whether it is done so that overflow rolls over to the other channels, but teaming or bonding has a round-robin (RAID 0 style) mode, which is, BTW, rather CPU intensive.
In openSUSE, would this be related to bonding slaves?
Yes. The new kid on the block is the team driver.
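For anyone wanting to try it, a minimal sketch of the sysconfig/wicked style config on openSUSE (the interface names, address and bond mode here are placeholders, not a recommendation):

  # /etc/sysconfig/network/ifcfg-bond0
  STARTMODE='auto'
  BOOTPROTO='static'
  IPADDR='192.168.1.10/24'
  BONDING_MASTER='yes'
  BONDING_SLAVE_0='eth0'
  BONDING_SLAVE_1='eth1'
  BONDING_MODULE_OPTS='mode=active-backup miimon=100'

The slave interfaces get their own ifcfg-ethX files with STARTMODE='hotplug' and BOOTPROTO='none'. The team driver does roughly the same job but is configured through teamd/NetworkManager rather than the bonding kernel module.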
LACP is quite old; I wonder if Netgear is providing the hardware version, i.e. letting the switch do the work? We have had network interface bonding in Linux for at least 10-12 years, also with LACP. We are using it, but I don't recall needing anything special in the switches. (I didn't do the setup.)
Well, that's the rub.... I set up 2 Intel 10G cards with 2 ports each between two computers -- they did need a direct connection. And if you want to go between more than 2 computers, you need to be able to route the conversation packets that are split between cables through a capable router -- usually more than a home budget wants to see, the last time I priced them.
The switches must support LACP, LACP must be configured in advance, and the configuration must match between the two partners, otherwise no connection is possible. If available, LACP is preferable to static aggregation (EtherChannel) because LACP actively probes each physical link and can detect problems beyond simple physical link failure.
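Concretely (a sketch, not a full recipe): on the Linux side LACP is just the 802.3ad bond mode, and the corresponding switch ports have to be put into a matching LACP group/LAG in the switch's management interface:

  BONDING_MODULE_OPTS='mode=802.3ad miimon=100 lacp_rate=fast'

Whether negotiation actually succeeded can be checked per slave with

  cat /proc/net/bonding/bond0

which shows the aggregator IDs and the partner's MAC/state for each link.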
Linux also offers other bond modes that do not require special switch support/setup. Depending on the exact requirements, they may be sufficient for load distribution across multiple links.
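For example (sketch only), these are selected the same way via the mode option and need nothing from the switch:

  BONDING_MODULE_OPTS='mode=balance-tlb miimon=100'   # outgoing traffic balanced across slaves
  BONDING_MODULE_OPTS='mode=balance-alb miimon=100'   # balances both directions via ARP negotiation
  BONDING_MODULE_OPTS='mode=active-backup miimon=100' # pure failover, no extra bandwidth

balance-rr, on the other hand, does expect the switch ports to be grouped (static EtherChannel), otherwise the switch sees the same MAC flapping between ports.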
The goal would be that one computer (primarily a file server type of thing) has a high-bandwidth stream for reading and writing data to the switch, while numerous other computers read and write via their single cable. This server would have, say, four 10 Gbit Ethernet ports connected to the switch. Ideally these would appear to be a single network.
Yup, sounds about right - that is how our storage servers are connected to the switch, only 4 x 1Gb though. I would also look at being able to feed such a network - i.e. make sure the IO devices have enough bandwidth, as well as sufficient bandwidth on the interconnect between the IO devices and the network device(s).
---- That is more like a failover in which you won't get the parallel bandwidth.

I eventually gave up on trying to aggregate 2x10G, as the CPU overhead made it, at best, no faster than a single 10G link. Even with jumbo packets at 9k (which all talking computers on the same physical net would need to be capable of running), 10G was close to the limit, with CIFS reads at ~600 MB/s and writes at around the low 300s. Well, right now it's 635 MB/s read (the M is the same base as the B, i.e. base 2) and 240 MB/s write, with a Win7 x64 client (3.2 GHz Xeon CPU) and a Linux server with 3.33 GHz CPUs. Of note, that's network speed without file I/O; that slows it a bit, even though locally the disk does about 1-1.2 GB/s (max linear read/write using direct I/O).
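In case it helps anyone, the jumbo frame part is just an MTU setting on every interface in the path (interface name is a placeholder):

  ip link set dev eth0 mtu 9000
  # or persistently, in /etc/sysconfig/network/ifcfg-eth0:
  MTU='9000'

Every host and switch port on that segment has to accept 9000-byte frames, otherwise the larger frames simply get dropped.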