half Windows, but the Acorns are being phased out in favour of new Windows machines - although they might be new Linux machines if I can make the school see sense :-).
You might be able to set one up as dual-boot then roll that out to all of them.
Clearly the storage requirements for the Windows software are of a different order of magnitude to the Acorn stuff, so 1GB per child does not seem overly generous to me.
Only an order of magnitude if they use Publisher. Otherwise I, like others, find a TB rather large - although two days ago I did delete 8GB of MP3 files from the directory of one pupil here, and today there are over twenty pupils with more than 100MB each. Drives fall in price by about 40% per annum, so it is extravagant to buy now the capacity you will need in two years' time.
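To put that 40% figure into numbers - a quick sketch, using only the prices quoted in this thread and assuming the fall stays steady:

```python
# If drive prices fall 40% per annum, a drive costs 60% of last
# year's price each year, so the capacity you need in two years
# will cost only 0.6^2 = 36% of what it costs today.
def future_price(price_today, annual_fall=0.40, years=2):
    """Projected price after `years`, assuming a steady annual fall."""
    return price_today * (1 - annual_fall) ** years

# The 80GB drive at 220 pounds mentioned below would, on this
# assumption, cost roughly 79 pounds in two years.
print(round(future_price(220)))  # -> 79
```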
I'm happy setting up Linux as a Samba server to do this lot, but does anyone have experience/advice on choice of hardware for the job (I'm happy to build a machine from bits)? RAID is a must, and IDE gives us a lot more MB for our money than SCSI, but perhaps at the cost of performance/reliability? Nevertheless, it seems to me that even a modest hard disc can outperform the 100Mbit ethernet to which the server will be connected.
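A back-of-envelope check on that last claim - the disc transfer figure here is my own rough assumption, not a number from the post:

```python
# 100Mbit ethernet carries at most 100/8 = 12.5 MB/s, before any
# protocol overhead. The 25 MB/s sustained sequential transfer for
# a modest IDE disc is an illustrative assumption only.
ethernet_MBps = 100 / 8
disc_MBps = 25

print(f"ethernet ceiling: {ethernet_MBps} MB/s")   # -> 12.5 MB/s
print(f"disc outruns the wire: {disc_MBps > ethernet_MBps}")
```

So for bulk transfers the network, not the disc, is likely to be the bottleneck - though under many concurrent small requests the disc's seek time changes the picture, which is where the SCSI discussion below comes in.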
Although we now have RAID (software RAID on FreeBSD, on a dedicated server running NFS, Samba, AppleTalk etc.), I am not sure it is essential if you have good backups. In our case these are done by overnight dump/gzip scripts to a separate machine, and by September I hope there will be machines at four widely separated locations in the school spending much of every night cross-dumping to each other! You need a fast processor for the compression. Backup drives are 80GB slow IDE (5400 rpm @ 220 pounds each). It's much faster restoring from a hard drive than from tape!

I think SCSI is still worth the premium for servers. SCSI is much faster even at the same raw transfer rate, because SCSI supports tagged queueing. My guru tells me the latest IBM IDE drives now support that too, but with caveats. Anyway, here is what he says:

===========================================================

With tagged queueing, the host can send several commands (eg. read/write sector) to the drive without waiting for them to complete: the drive then executes them in whatever order is most convenient, reporting the results as each individual command completes.

This is particularly effective when writing multiple sectors within a cylinder: if you supply the commands one at a time, you have to wait (on average half a revolution) for the first command's sector to appear under the head, and then (if writing consecutive sectors) you have to get the next command in quickly enough to ensure you don't miss the sector going past. Historical systems used to lay out the sectors on each cylinder so as to leave just the right gap to account for the processing delay (eg. in order 1,10,2,11,3,12 etc. round a track, with a similar skew between platters), but with modern drives you just don't have enough information to do this (for a start, they don't have a constant number of sectors per track - tracks nearer the edge of the disc have more sectors).
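The rotational-latency argument above can be sketched with a toy model - this is not real drive geometry, just an illustration of why reordering helps. Sectors sit at angular positions on one track, the head sweeps forward only, and we compare serving requests strictly in issue order against letting the drive (as with tagged queueing) take each pending sector as it next comes past:

```python
# Toy model: cost is measured in sectors of rotation the drive must
# wait through. All numbers are illustrative, not real drive geometry.
SECTORS_PER_TRACK = 16

def rotation_needed(start, target):
    """Forward rotation (in sectors) from `start` to `target`."""
    return (target - start) % SECTORS_PER_TRACK

def total_rotation(requests, reorder):
    pos, total = 0, 0
    pending = list(requests)
    while pending:
        if reorder:
            # Tagged queueing: serve whichever pending sector is nearest ahead.
            nxt = min(pending, key=lambda s: rotation_needed(pos, s))
        else:
            # One command at a time, in the order issued.
            nxt = pending[0]
        total += rotation_needed(pos, nxt)
        pos = nxt
        pending.remove(nxt)
    return total

reqs = [1, 9, 2, 10, 3, 11]   # interleaved, echoing the skew example
print(total_rotation(reqs, reorder=False))  # -> 43
print(total_rotation(reqs, reorder=True))   # -> 11
```

In this contrived case the in-order drive spends nearly four revolutions' worth of waiting where the reordering drive spends under one.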
A method of cheating (in the write case) is for a drive to enable a write cache and falsely report to the host that each command is complete as soon as it arrives. This gives you good scores on benchmarks, but means that the host has no idea when data is actually safe on disc. BSD Unix systems have always taken care to write out updates to directories in the right order, and to wait for each command to complete before re-using those disc sectors: this is what ensures that you don't lose large numbers of files after an uncontrolled shutdown and fsck. Some IDE drives have been seen to keep sectors in cache, unwritten, for hours at a time if the drive is kept busy - such a drive will seriously corrupt the filesystem if a power failure occurs at an inconvenient moment. Using tagged queueing, you can get the same performance as the 'cheat' without sacrificing reliability.

I haven't studied the IDE tagged queueing stuff in detail, but another trick available on SCSI is detach/reconnect: this allows the command, but not its data, to be sent to a drive - the drive will later re-connect to the host to fetch the data associated with that command, hence allowing more pending commands to be active than can fit into the drive's RAM, with the drive choosing the order in which the data arrives. I don't think that IDE can do this properly - even with tagged queueing - so with IDE you lose performance if you have more than one drive per controller.

=================================================================

To summarise: on a single-user workstation the drive rarely has to do more than one operation (load file/save file) at once, so IDE is as fast as SCSI. But on a server there may be dozens of concurrent read/write operations, and this is where SCSI becomes very much faster than IDE. Higher-level queueing can ameliorate but not solve this. But manufacturers keep on having new ideas!
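From the application side, the most the host can do about the write-cache 'cheat' is ask the operating system to flush - a minimal sketch, noting that a drive with a lying write cache can still defeat even this, which is exactly the corruption risk described above:

```python
import os
import tempfile

def safe_write(path, data):
    """Write `data` to `path` and flush it out of the kernel's buffers.

    fsync() blocks until the kernel has handed the data to the drive;
    whether it is then truly on the platter depends on the drive's
    write-cache honesty, as discussed above.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)

# Hypothetical file, for illustration only.
path = os.path.join(tempfile.mkdtemp(), "important.dat")
safe_write(path, b"marks database")
print(open(path, "rb").read())  # -> b'marks database'
```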
RAID 5 only costs you one drive's worth of capacity (for parity), so you could put a dozen drives on one SCSI controller and lose only one to redundancy.
--
Christopher Dawkins, Felsted School, Dunmow, Essex CM6 3JG
01371-820527 or 07798 636725  cchd@felsted.essex.sch.uk