Hans Witvliet wrote:
A correctly designed mainboard connects the onboard nic directly to the chipset, so you would get better performance with a mainboard that supports 2 GBit nics onboard compared to 2 PCI nics.
Sandy
Hi Helge, Sandy
Are you sure the bottleneck is the NIC? (Don't get me wrong, it could very well be the case, but..)
Most of the time it's not. It's rather the hdd array or the controller, especially if the files are written to the raid array. Just imagine several clients trying to write data simultaneously to the array: naturally the overall throughput will drop. How much depends on the array; a good (read: expensive scsi raid) array will handle this a bit more gracefully. Easy to test, just do some copy jobs on the array:

1 client -> read array -> /dev/null (this should be max read throughput; if it's below 100 MB/s then a second nic would be wasted)
2 clients -> read array -> /dev/null (how much does performance drop?)
5 clients -> read array -> /dev/null (how much does performance drop?)
1 client (from ram disk) -> write to array (max write throughput)
2 clients ...
5 clients ...

As a comparison, try the same over the network with the GBit nic. If the throughput is below 70-80 MB/s you should rather change the nic/mainboard.
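A minimal sketch of those copy jobs with dd; the paths here are assumptions, so point TESTFILE at the raid array you actually want to measure, and in practice use a file much larger than RAM so the page cache doesn't flatter the numbers:

```shell
#!/bin/sh
# Sketch of the copy-job tests above. TESTFILE is an assumed path;
# put it on the raid array you want to measure.
TESTFILE=./raid-testfile

# Create the test file (100 MB here; use several GB in practice
# so the page cache cannot hold the whole file).
dd if=/dev/zero of="$TESTFILE" bs=1M count=100 2>/dev/null

# 1 client -> read array -> /dev/null: dd reports the rate on stderr.
dd if="$TESTFILE" of=/dev/null bs=1M

# 5 clients -> read array -> /dev/null: five parallel readers;
# compare their per-client rates with the single-client run.
for i in 1 2 3 4 5; do
    dd if="$TESTFILE" of=/dev/null bs=1M &
done
wait

# 1 client -> write to array (reading /dev/zero costs next to
# nothing, so this approximates writing from a ram disk).
dd if=/dev/zero of="$TESTFILE.write" bs=1M count=100

rm -f "$TESTFILE" "$TESTFILE.write"

# For the network comparison, the same pattern over netcat
# ("otherhost" is a placeholder; nc flag syntax varies between
# netcat variants, e.g. "nc -l -p 5001" vs "nc -l 5001"):
#   receiver:  nc -l -p 5001 > /dev/null
#   sender:    dd if="$TESTFILE" bs=1M | nc otherhost 5001
```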
I recall a discussion I had some years ago with a SUN hardware guru. For a certain customer I wanted to improve general response, so I suggested upgrading from 100BT to gigabit. He replied: nice gesture, but it won't do much good. At SUN they performed some tests and (for their 64-bit machines) they had a rule of thumb: for every Mbit you need a MHz, or the CPU can't cope.
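As a back-of-the-envelope check of that rule (my arithmetic, not SUN's test data): at roughly 1 MHz of cpu per Mbit/s, a saturated GBit link eats about a 1000 MHz cpu all by itself, ten times what 100BT asks for:

```shell
# Rule of thumb: ~1 MHz of cpu for every Mbit/s of traffic.
for link in 100 1000; do
    echo "${link} Mbit/s link -> needs ~${link} MHz of cpu"
done
```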
Can't comment on Sun hardware, but imho an onboard nic directly connected to the chipset is a recent development; I think Intel's ICH6 and nForce 3/4 have that feature.
On peaks you might be able to generate enough data (128 MB/s), but not for longer periods. Adding a second NIC would not help; on the contrary...
A good onboard nic offers more than 100 MB/s, meaning close to the theoretical max of 125 MB/s. You won't get this with a pci nic. Recently I saw a review of some nForce 4 mainboards that performed at 110 MB/s and offered similar ata/sata raid 0 performance. Both the Asus A8N-SLI Deluxe and the Gigabyte GA-K8NXP-SLI provide 2 GBit nics onboard and use the sk98lin module provided by SuSE 9.2.
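That 125 MB/s ceiling is simply the wire speed divided by eight bits per byte (ethernet/IP framing overhead shaves a few percent more off in practice):

```shell
# GBit wire speed in bytes: 1000 Mbit/s over 8-bit bytes.
echo "$((1000 / 8)) MB/s theoretical max"   # prints "125 MB/s theoretical max"
```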
Having said that, 3 months ago there was a review of GBit nics with appalling results. AFAIR it was a PC Magazine or Tom's Hardware review. Some couldn't even get beyond 500 Mbit/s... It made me decide to postpone an upgrade at home.
Wouldn't surprise me. The fileservers at my company still use Adaptec raid controllers, and the bottleneck is definitely the raid array when lots of small files are read or, even worse, written. I also remember some reviews where the cheap GBit nics wouldn't give more than 70 MB/s while needing 40-70% cpu at the same time. :(( Sandy