On Mon 07 Jul, Dave Williams wrote:
Having thought about this a little more, two more questions came to mind.
1) If a Gigabit NIC is connected through a PCI slot, isn't the PCI bus going to be a bottleneck?
2) Is it possible to configure multiple 100Mbit NICs to boost I/O speeds? The only problem with this is how to configure the IP addresses and make the multiple NICs transparent, so that our 300+ machines use the same server through different NICs.
I think you have to distinguish between bits per second and bytes (or words) per second. A 16-bit bus running at 33MHz will move 33M x 2 bytes x 8 bits = 528 Mbits per second, which is more than enough to keep a 100Mbit card busy - and even a gigabit card will get nowhere near 1000 Mbits per second anyhow. A 32-bit PCI bus will move twice that data (albeit contending with other devices). Most gigabit cards have a 64-bit extended PCI connector, although they will usually work in normal PCI slots. I have an early gigabit card which, despite its 64-bit connector, only operates at 32 bits - firmware problems, apparently!

Multiple cards were the way to go for some, although I never tried this. So you would have Samba running through one to provide file sharing, printing services through another, etc.

In reality SCSI (3) would need to be in a large RAID before it began to saturate things - and even then only with large contiguous files.

Just my guesstimated opinions of course (often not correct).

--
Alan Davies
Head of Computing
Birkenhead School
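P.P.S. On the multiple-NIC idea: one piece of the puzzle is pinning each service to one card. Samba supports this directly in smb.conf via the `interfaces` and `bind interfaces only` options - a minimal sketch, where the address and netmask are hypothetical examples for a second NIC:

```
; /etc/samba/smb.conf fragment - the 192.168.2.1/24 address is hypothetical
[global]
   interfaces = 192.168.2.1/24
   bind interfaces only = yes
```

Making 300+ clients spread themselves across the NICs transparently is the harder part - without something like round-robin DNS, each client still has to be pointed at one server address or the other by hand.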
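P.S. The bus arithmetic can be sanity-checked in a few lines. This is only a sketch of the nominal peak figures - a real bus is shared with other devices and carries protocol overhead, so actual throughput is lower:

```python
# Peak (theoretical) parallel bus bandwidth: clock (MHz) x width (bits) = Mbit/s.

def bus_mbit_per_s(clock_mhz: int, width_bits: int) -> int:
    """Nominal peak transfer rate of a parallel bus in Mbit/s."""
    return clock_mhz * width_bits

print(bus_mbit_per_s(33, 16))  # 528  - the 16-bit figure above
print(bus_mbit_per_s(33, 32))  # 1056 - standard 32-bit PCI, roughly gigabit line rate
print(bus_mbit_per_s(33, 64))  # 2112 - 64-bit extended PCI
```

Even the 32-bit figure comfortably exceeds what a gigabit card delivers in practice, which is the point above.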