Hi all, some general questions on the subject and a SuSE-specific one.

I have a network which requires very good performance. There is a file server and several clients, each of which moves very large amounts of data (GBs) around. As long as only one client accesses the server, performance is good, but when several clients access the server at once, performance drops, as would be expected.

General questions: Is it possible to improve the server's network performance by adding a second GBit ethernet card? I think a PCI card is bound by the PCI bus performance. Would a second card, if PCI, even enhance performance noticeably? If I upgrade to PCIe, or use a mainboard with onboard GBit ethernet, would that change anything?

So far, I was thinking in terms of using different subnets hanging off different ethernet ports on the server. Provided that both GBit cards have full performance, clients on different subnets would both get the full performance of a GBit card. Am I right?

Now the SuSE question. The solution with two separate subnets is, of course, somewhat suboptimal. Does SuSE Linux provide a way to use a sort of channel bundling, so that the whole network can benefit from the advantages of having two network adapters on the server? How would I do that?

Thanks,
Helge
On Friday 18 February 2005 12:06 pm, Helge Preuss wrote:
Hi all,
General questions: Is it possible to improve the server's network performance by adding a second GBit ethernet card? I think a PCI card is bound by the PCI bus performance. Would a second card, if PCI, even enhance performance noticeably? If I upgrade to PCIe, or use a mainboard with onboard GBit ethernet, would that change anything?

Depends on the server architecture. If it has more than one PCI bus and the NICs are on different busses, you'll get better performance, depending on what else is hanging off those PCI busses (video, USB, etc.). PCIx will give you better throughput. If both NICs are on the same PCI bus then performance will improve, but not by a lot: maybe 125% to 150% of a single card, never 200%. Mainboard NIC(s) are on a PCI bus, so you need to know whether the PCI slots are also on the same bus as the mainboard NIC(s).
So far, I was thinking in terms of using different subnets hanging off different ethernet ports on the server. Provided that both GBit cards have full performance, clients on different subnets would both get the full performance of a GBit card. Am I right?

Yes, depending on system load and what else is on the PCI busses. A single GBit NIC will probably saturate a PCI bus with your users, unless they take turns.
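(As a concrete illustration of the two-subnet setup: it is just one address per NIC, each in its own subnet, with half the clients pointed at each address. The interface names and addresses below are made up, and on SuSE the equivalent would normally go into /etc/sysconfig/network/ifcfg-eth* files rather than being typed by hand.)

    # subnet A on the first GBit NIC, subnet B on the second
    ip addr add 192.168.1.1/24 dev eth0
    ip addr add 192.168.2.1/24 dev eth1
    ip link set eth0 up
    ip link set eth1 up

    # clients in 192.168.1.0/24 talk to the server as 192.168.1.1,
    # clients in 192.168.2.0/24 talk to it as 192.168.2.1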
Now the SuSE question. The solution with two separate subnets is, of course, somewhat suboptimal. Does SuSE Linux provide a way to use a sort of channel bundling, so that the whole network can benefit from the advantages of having two network adapters on the server? How would I do that?

Just talked about on this list this week... Google 'ethernet channel bonding' and there is a lot of info. Can be done at the NIC end or in combination with intelligent switches.
Thanks,
Helge
Stan
Thanks,
Now the SuSE question. The solution with two separate subnets is, of course, somewhat suboptimal. Does SuSE Linux provide a way to use a sort of channel bundling, so that the whole network can benefit from the advantages of having two network adapters on the server? How would I do that?
Just talked about on this list this week... Google 'ethernet channel bonding' and there is a lot of info. Can be done at the NIC end or in combination with intelligent switches.
Right now it seems the simple solution, splitting everything into two subnets, has the far better cost/benefit ratio. I have taken your hint and googled around, but all I found were rather old documents. There is a new and good HOWTO at http://okworld.maleo.net/blog/archives/2004/12/10/ethernet-channel-bonding-s... , but I still can't figure out how to connect the clients (short of giving two adapters to every client, which is out of the question). So I think two separate subnets is what I'll do.

Helge
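(For the archives, since the HOWTO question came up: with channel bonding the clients stay exactly as they are, each with its single NIC; only the server enslaves its two cards into one logical interface, and the bonding mode, or for 802.3ad the switch, decides how traffic is spread across the links. Below is a rough sketch with the stock 2.6 bonding driver; the interface names, the address and the chosen mode are assumptions, and on SuSE the same thing can be declared in an ifcfg-bond0 file under /etc/sysconfig/network.)

    # load the bonding driver; miimon=100 checks link state every 100 ms.
    # balance-rr round-robins packets over both NICs (the switch ports
    # usually need to be grouped as a static trunk for that); active-backup
    # and balance-alb work with any switch; 802.3ad needs a switch that
    # supports link aggregation.
    modprobe bonding mode=balance-rr miimon=100

    # give the logical interface the server's address and bring it up
    ip addr add 192.168.1.1/24 dev bond0
    ip link set bond0 up

    # enslave the two physical GBit NICs
    ifenslave bond0 eth0 eth1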
Helge Preuss wrote:
General questions: Is it possible to improve the server's network performance by adding a second GBit ethernet card? I think a PCI card is bound by the PCI bus performance. Would a second card, if PCI, even enhance performance noticeably? If I upgrade to PCIe, or use a mainboard with onboard GBit ethernet, would that change anything?
A correctly designed mainboard connects the onboard nic directly to the chipset, so you would get better performance with a mainboard that supports 2 GBit nics onboard compared to 2 PCI nics.

Sandy
On Friday 18 February 2005 21:16, Sandy Drobic wrote:
Helge Preuss wrote:
General questions: Is it possible to improve the server's network performance by adding a second GBit ethernet card? I think a PCI card is bound by the PCI bus performance. Would a second card, if PCI, even enhance performance noticeably? If I upgrade to PCIe, or use a mainboard with onboard GBit ethernet, would that change anything?
A correctly designed mainboard connects the onboard nic directly to the chipset, so you would get better performance with a mainboard that supports 2 GBit nics onboard compared to 2 PCI nics.
Sandy
Hi Helge, Sandy,

Are you sure the bottleneck is the NIC? (Don't get me wrong, it could very well be the case, but...)

I recall a discussion I had some years ago with a Sun hardware guru. For a certain customer I wanted to improve general response and suggested upgrading from 100BT to gigabit. He replied: nice gesture, but it won't do much good. At Sun they had performed some tests and, for their 64-bit machines, they had a rule of thumb: for every Mbit/s of throughput you need a MHz of CPU, or the CPU can't cope. On peaks you might be able to generate enough (128MB/s) data, but not for longer periods. Adding a second NIC would not help, on the contrary...

Having said that, 3 months ago there was a review of GBit NICs with appalling results. AFAIR a pc-magazine or Tom's Hardware review. Some couldn't even get beyond 500 Mbit/s... It made me decide to postpone an upgrade at home.

Hans
Hans Witvliet wrote:
A correctly designed mainboard connects the onboard nic directly to the chipset, so you would get better performance with a mainboard that supports 2 GBit nics onboard compared to 2 PCI nics.
Sandy
Hi Helge, Sandy
Are you sure the bottleneck is the NIC? (Don't get me wrong, it could very well be the case, but...)
Most of the time it's not. It's rather the hdd array or the controller, especially if the files are written to the raid array. Just imagine several clients trying to write data simultaneously to the array: naturally the overall throughput will drop. How much depends on the array. A good (read: expensive SCSI raid) array will handle this a bit more gracefully.

Easy to test: just do some copy jobs on the array:

1 client -> read array -> /dev/null (this should be max read throughput; if it's below 100 MB/s then a second nic would be wasted)
2 clients -> read array -> /dev/null (how much does performance drop?)
5 clients -> read array -> /dev/null (how much does performance drop?)
1 client (from ram disk) -> write to array (max write throughput)
2 clients ...
5 clients ...

As a comparison, try the same thing over the network with the GBit nic. If the throughput there is below 70-80 MB/s you should rather change the nic/mainboard.
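(A quick way to run those local tests is with dd, which prints a throughput figure when it finishes; the paths and sizes below are only placeholders. For the multi-client cases, start the same dd from two or five shells at once and add up the rates.)

    # maximum local read throughput of the array for one big file
    dd if=/data/bigfile of=/dev/null bs=1M

    # write test with the source in RAM, so the source cannot be the bottleneck
    mkdir -p /mnt/ram
    mount -t tmpfs -o size=2048m tmpfs /mnt/ram
    dd if=/dev/zero of=/mnt/ram/testfile bs=1M count=1024
    dd if=/mnt/ram/testfile of=/data/testfile-out bs=1M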
I recall a discussion I had some years ago with a Sun hardware guru. For a certain customer I wanted to improve general response and suggested upgrading from 100BT to gigabit. He replied: nice gesture, but it won't do much good. At Sun they had performed some tests and, for their 64-bit machines, they had a rule of thumb: for every Mbit/s of throughput you need a MHz of CPU, or the CPU can't cope.
Can't comment on sun hardware, but imho onboard nic directly connected to the chipset is a recent development, I think Intels ICH6 and nForce 3/4 have that feature.
On peaks you might be able to generate enough (128MB/s) data, but not for longer periods. Adding a second NIC would not help, on the contrary...
A good onboard nic offers more than 100 MB/s, meaning close to the theoretical max of 125 MB/s. You won't get this with a pci nic. Recently I saw a review of some nForce 4 mainboards that performed at 110 MB/s and offered similar ata/sata raid 0 performance. Both the Asus A8N-SLI Deluxe and the Gigabyte GA-K8NXP-SLI provide 2 GBit nics onboard and use the sk98lin module provided by Suse 9.2.
Having said that, 3 months ago there was a review of GBit NICs with appalling results. AFAIR a pc-magazine or Tom's Hardware review. Some couldn't even get beyond 500 Mbit/s... It made me decide to postpone an upgrade at home.
Wouldn't surprise me. The fileservers at my company still use adaptec raid controllers, and the bottleneck is definitely the raid array when lots of small files are read or, even worse, written. I also remember some reviews where the cheap GBit nics wouldn't give more than 70 MB/s while needing 40-70% cpu usage at the same time. :((

Sandy
Sandy,

On Friday 18 February 2005 14:41, Sandy Drobic wrote:
...
Can't comment on sun hardware, but imho onboard nic directly connected to the chipset is a recent development, I think Intels ICH6 and nForce 3/4 have that feature.
I have a (so-far) late-model Intel board, their D865PERL. The block diagram indicates that both the 10/100 Base-T and optional gigabit Ethernet ports do indeed attach directly to the ICH and the MCH, resp. (The I/O controller hub and memory controller hub, resp., are parts of the so-called "chipset.") I find it mildly interesting that the 10/100 Base-T and gigabit interfaces do not both attach to either the MCH or the ICH but rather are split between them. The PCI bus (as well as the PATA and SATA interfaces, the USB buses and the on-board audio) is also attached to the ICH. The IEEE1394 interface is connected via the PCI bus.
...
Sandy
Randall Schulz
Randall R Schulz wrote:
Sandy,
On Friday 18 February 2005 14:41, Sandy Drobic wrote:
...
Can't comment on sun hardware, but imho onboard nic directly connected to the chipset is a recent development, I think Intels ICH6 and nForce 3/4 have that feature.
I have a (so-far) late-model Intel board, their D865PERL. The block diagram indicates that both the 10/100 Base-T and optional gigabit Ethernet ports do indeed attach directly to the ICH and the MCH, resp. (The I/O controller hub and memory controller hub, resp., are parts of the so-called "chipset.") I find it mildly interesting that the 10/100 Base-T and gigabit interfaces do not both attach to either the MCH or the ICH but rather are split between them. The PCI bus (as well as the PATA and SATA interfaces, the USB buses and the on-board audio) is also attached to the ICH. The IEEE1394 interface is connected via the PCI bus.
I do not know about Intel, but I remember reading a review a while back on TomsHardware.com saying that nVidia's nForce2 chipsets purposely put the gigabit on the MCH to avoid bottlenecking on the ICH. It makes sense splitting them off like that - though it's definitely a custom hack for the chipset folks at that point.

- Richard
At 07:16 AM 19/02/2005, Sandy Drobic wrote:
Helge Preuss wrote:
General questions: Is it possible to improve the server's network performance by adding a second GBit ethernet card? I think a PCI card is bound by the PCI bus performance. Would a second card, if PCI, even enhance performance noticeably? If I upgrade to PCIe, or use a mainboard with onboard GBit ethernet, would that change anything?
A correctly designed mainboard connects the onboard nic directly to the chipset, so you would get better performance with a mainboard that supports 2 GBit nics onboard compared to 2 PCI nics.
Sandy
Agree with Sandy on this, but you need to choose your motherboard carefully; many hang the PCI bus off the same chipset port (saves chips & $$$).

Dumb question, but if they're using files this big, what about using IPv6 internally, as it's designed for large file throughput?

Have you checked your switch out? Is it 100base full-duplex, and is it set that way at the switch AND on their cards? Be aware that many 100/10 switches can't handle large packet loads at 100/full duplex, and their internal bus buffers are small. I've even known them to switch back to half-duplex midstream and then back to full at the end. The same goes for the cards: chase drivers that DON'T have 10base config code in them.

my 2 bits
scsijon
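(On the duplex point: checking what the card actually negotiated is straightforward with ethtool, or mii-tool on older drivers; eth0 below is just an example name.)

    # show the negotiated speed and duplex of the first NIC
    ethtool eth0

    # pin the card to 100 Mbit full duplex if autonegotiation misbehaves
    # (only if the switch port is set the same way)
    ethtool -s eth0 speed 100 duplex full autoneg off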
Participants (7): Hans Witvliet, Helge Preuss, Randall R Schulz, Richard Mixon (qwest), Sandy Drobic, scsijon, Stan Glasoe