It *is* best to keep domains relatively self-contained: that helps keep 80% of traffic local and only 20% external, which reduces broadcasts to outside segments and lets local machines (e.g. the language lab) make better use of their bandwidth.

If you *were* to amalgamate these machines onto one server, IDE RAID may not cope too well. Heavy disk access can seriously affect the hardware's ability to operate properly, and the MTBF could drop sharply... but cost is obviously a factor there, so that's your decision.

Your language lab will need a good switch with solid circuitry, and instead of separate servers you might consider VLANs. A more expensive switch with VLAN capability can offer far more granular control over network traffic than most alternatives; it may be worth looking into.

You're right that your multimedia-accessing school users will need good bandwidth. If it's any consolation, I worked in a school where the language lab had a gigabit fibre connection to the main PDC in the server room (one building away) and still had usage problems at times -- peak periods such as early-morning logons, which we solved by logging the machines on remotely in stages before registration ;-)

Our situation was similar in that 250 machines were used in the language/design block, so we set up a local workgroup server, answerable to a PDC running NT 4 Server with quad Xeon processors and 512MB RAM! The local server was just a PIII 450 at the time with 512MB RAM, and *seemed* to run OK; both used SCSI though. The switch was a Cisco Catalyst 1900, and being manageable remotely via telnet helped with admin from a central location. It cost £400 second-hand.

Anyway, let us know how the whole thing goes -- I'm interested at least!

Take care,
Paul, CCNA
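[A quick sketch of the staged-logon idea above. The batch size and interval are illustrative assumptions, not what Paul's school actually used; the point is that waking machines in batches spreads the morning logon storm over a window before registration instead of hitting the server all at once.]

```python
# Back-of-envelope staged-logon schedule (hypothetical numbers).
MACHINES = 250       # machines in the language/design block
BATCH = 25           # machines logged on per stage (assumed)
INTERVAL_MIN = 3     # minutes between stages (assumed)

stages = -(-MACHINES // BATCH)          # ceiling division
window = (stages - 1) * INTERVAL_MIN    # minutes from first to last stage
print(f"{stages} stages, logon load spread over {window} minutes")
# -> 10 stages, logon load spread over 27 minutes
```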
In an earlier post you said that CPU activity was nearly always less than 10% on a dual-processor server. I can understand, say, a thin-client server needing multiple processors, but is the CPU really likely to be the limit in a server primarily serving files? I would have thought the NIC would be the bottleneck, so use a card with 4 trunked connectors -- D-Link do one for around £150, IIRC. OK, that uses 4 switch ports, but it gives 800 Mbit/s duplex and is much less expensive than Gigabit.

You can get switches that support VLANs for under £300 new, so you could have, say, an Athlon server connected to a 24-port switch such that 20 machines share that server. Those 20 machines then effectively share an 800 Mbit/s duplex pipe to the server. If you link the switches using port trunking you get more bandwidth between them at the expense of ports, but since these switches are cheap, it could well be as cost-effective to link them together in this way.

Best to work out bandwidth demands first, then find the least expensive way of meeting them.

regards,
Ian

----- Original Message -----
From: Paul Munro
To: suse-linux-uk-schools@suse.com
Sent: Monday, July 02, 2001 2:43 PM
Subject: [suse-linux-uk-schools] Re: Network Design
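[A rough check of Ian's "work out bandwidth demands first" advice. This assumes his "800 meg duplex" trunk means 4 x 100 Mbit/s full duplex, i.e. 400 Mbit/s aggregate in each direction; the 1.5 Mbit/s demand figure is an illustrative per-client video stream rate, not from the thread.]

```python
# Per-client bandwidth on a 4-port trunked NIC shared by 20 machines.
TRUNK_MBPS = 4 * 100   # one-way aggregate of four 100 Mbit/s links
CLIENTS = 20           # machines on the 24-port switch sharing the server

per_client = TRUNK_MBPS / CLIENTS
print(f"~{per_client:.0f} Mbit/s per client if all {CLIENTS} pull at once")
# -> ~20 Mbit/s per client if all 20 pull at once

# Compare against an assumed per-client demand, e.g. ~1.5 Mbit/s video:
DEMAND_MBPS = 1.5
print("headroom ok" if per_client >= DEMAND_MBPS else "trunk undersized")
# -> headroom ok
```

Even in the worst case of all 20 clients streaming simultaneously, the trunk leaves plenty of headroom at that demand level, which supports Ian's point that trunked 100 Mbit/s ports can be a cheaper fit than Gigabit.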
participants (2)
- Ian Lynch
- Paul Munro