Per wrote regarding 'Re: [SLE] Recommended server hardware for a LAMP server' on Tue, Jan 25 at 06:41:
Danny Sauer wrote:
So, first, figure out a budget. Next, figure out how much space you'll need. If you can fit everything you need on a single ATA disk (about 200GB), run a RAID-1 with at least 2 disks. If you need more than one disk, figure on RAID-5. Then use a 3Ware controller (yeah, software RAID is good, but just spend the extra hundred bucks - for the ease of connectivity if for no other reason).
All you get in hardware RAID over software RAID is performance. If you need performance, definitely go for a hardware RAID controller.
You also get the ability to hook up more drives, and each drive will generally have its own connector. So, it's easier to hook up, performs better, and has a minimal cost difference. The only thing you get with software RAID is less money spent and more obscure admin tools. And loss of the ability to hot-swap drives, supposedly.
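For what it's worth, those "obscure admin tools" on the software side these days mostly mean mdadm. Building a two-disk mirror is roughly this (a sketch only - /dev/sda1 and /dev/sdb1 are placeholder partitions, substitute your own devices):

    # build a two-disk RAID-1 array and put ext3 on it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mke2fs -j /dev/md0
    # array health lives here
    cat /proc/mdstat

Not rocket science, but it is one more layer to learn versus letting a hardware card present a single disk to the OS.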
And get memory that supports ECC. It's slightly slower, but you won't notice, and it's nice to have that extra assurance against errors at high clock speeds, IMHO.
Unless you're buying the same assurance for the rest of the system, it's not worth it. So unless you're also getting dual fans and dual power supplies, well, don't, IMHO. As for ECC guarding you "against errors at high clock speeds" - if your components aren't stable at their respective clock speeds, ECC won't save you.
The performance difference is minimal, the price difference is minimal, and it's more reliable. This is not an exercise in building the cheapest machine possible; it's a plan for building a low-end server. Some people think it's fine to carry a high deductible on their insurance, and often that pays off, as it's never needed. When the price difference is minimal, why not get the extra assurance? Errors from electrical interference are unlikely, but over time the likelihood increases. Whatever, though. It's not my gamble.

As far as the fans and power supplies go, a properly designed case *will* have redundant cooling. Redundant power supplies aren't in the same class as RAID drives, though. Granted, both can fail, but drives fail more often. On top of that, for the >99% of the time when all of the drives *are* working, there's a performance boost. There's no boost from a second power supply. It's just money spent on redundancy.
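And if you do go with ECC, it's worth verifying that the kernel is actually reporting corrected errors rather than just trusting the DIMMs. On kernels with the EDAC (formerly bluesmoke) modules loaded for your memory controller, the counters show up in sysfs; roughly like this, though the exact paths vary by kernel and chipset, so treat it as a sketch:

    # corrected and uncorrected error counts for the first memory controller
    cat /sys/devices/system/edac/mc/mc0/ce_count
    cat /sys/devices/system/edac/mc/mc0/ue_count

A ce_count that creeps up over the months is exactly the kind of quiet corruption non-ECC memory would never tell you about.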
Get some hot-swap enclosures for those hard drives you picked out. The enclosures are cheap, and since a drive's the most likely thing to die on your box, it'll save you some pain later.
Note that Linux isn't very good with hot-swapping IDE drives. Also, if you're not overly worried about downtime, hot-swap is hardly your priority.
I'm sitting next to 2 machines with hot-swap IDE drives, and both are running Linux. One's running SuSE, the other's running Gentoo on a PPC. One has the drives hooked to a 3Ware RAID card; the other has the drives in a FireWire enclosure. The FireWire machine is running software RAID over 8 drives, with LVM on top of the RAID. I've hot-swapped drives in both systems, multiple times, while the system was under load, and had 0 problems. The machines never needed to be rebooted. It cost a whopping $10/drive to mount them in those trays, and the trays promote better airflow over the drives to boot. Money well spent, IMHO.

Linux has no problem with the drives. Your motherboard's adaptor might have problems with hot-swapping, but then, that's one more reason to run a real RAID card.
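For the curious, swapping a member out of the software-RAID box boils down to something like this (a sketch - /dev/sdc1 stands in for whichever member died, and your device names will differ):

    # kick the dead drive out of the array
    mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
    # physically swap the drive, partition it to match, then re-add it
    mdadm /dev/md0 --add /dev/sdc1
    # watch the rebuild
    cat /proc/mdstat

The LVM layer sitting on top never notices any of it.

--Danny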