Recommended server hardware for a LAMP server
I'm about to install my own colo server in a data center. I need to supply the hardware myself, and am free to pick any components that suit me as long as it fits in a rack (I assume a 1U form factor). Up until this time I have been running a very low-end tower server for use as a development server with only a few public sites. This will be for use mostly as a LAMP web server running content management systems; most sites will have low volume although a few are fairly high. Other than the LAMP sites, I do run a few moderate volume mailman mailing lists. I have a low budget and want to get the most bang for my buck.

The network engineering folks I work with recommend HP DL320's; they are an HP reseller and are "Windows People". They frequently buy off of the HP refurb list. The DL320 is a single-processor 3GHz P4, expandable to 2GB of RAM. My own inclination is to believe that memory is of supreme importance, and that AMD Opterons may be a better value. HP has an Opteron package (DL145) that's inexpensive with base RAM, but another gig of RAM is $1K. I've seen better deals, but the service and support HP offers is attractive. FWIW, I will have easy physical access to the server in the data center.

Ideally (in my naive mind), I think I want a single Opteron that's dual-capable and expandable to many gigs of RAM. That way, I can add RAM/processors as needs grow and prices fall.

I'd like to hear what the consensus is on the various elements of server selection; importance of various components, brand name vs. generic, Intel vs. AMD, ATA vs. SCSI, value of extended warranties, etc.

TIA

Rob
I'm about to install my own colo server in a data center. I need to supply the hardware myself, and am free to pick any components that suit me as long as it fits in a rack (I assume a 1U form factor).
Rob, I'd be interested to know your reason for going for colo rather than just renting a dedicated server (or a few).
This will be for use mostly as a LAMP web server running content management systems; most sites will have low volume although a few are fairly high. Other than the LAMP sites, I do run a few moderate volume mailman mailing lists. I have a low budget and want to get the most bang for my buck.
Off the top of my head - the Sun V20z with dual Opterons is cool, IMHO - but see below.
AMD Opterons may be a better value. HP has an Opteron package (DL145) that's inexpensive with base RAM, but another gig of RAM is $1K. I've seen better deals, but the service and support HP offers is attractive. FWIW, I will have easy physical access to the server in the data center.
I think it will be difficult giving you any sound advice without knowing the load on the box, the expected reliability, availability etc.
I'd like to hear what the consensus is on the various elements of server selection; importance of various components, brand name vs. generic, Intel vs. AMD, ATA vs. SCSI, value of extended warranties, etc.
My favourite setup is something along these lines:

Dual server in Linux-HA. Both with mirrored IDE disks, unless the server will have lots of concurrent traffic, in which case I use SCSI. CPU and RAM will depend on the purpose, but always single CPU (dual CPU boards are less cost-effective, unless you're short on space), and 512-1024MB RAM unless it's a database server, where I'd probably opt for as much RAM as possible. Two NICs per node, each connected to a separate switch/network. I use mirrored disks in each node as the drives tend to be the first to go these days.

/Per Jessen, Zürich

--
http://www.spamchek.com/freetrial - getting rid of spam and virus - once and for all!
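For reference, the mirrored-disk half of a setup like this is only a handful of commands with Linux software RAID. A minimal sketch, assuming two hypothetical IDE partitions and an example mount point (mdadm shown here; raidtools would also work):

  # build a RAID-1 mirror from the two partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
  # put a filesystem on the mirror and mount it
  mkfs.ext3 /dev/md0
  mount /dev/md0 /srv/www
  # watch the initial sync, and check mirror health later on
  cat /proc/mdstat

The Linux-HA (heartbeat) part is configured separately on both nodes; the mirror above only protects a single node's disks.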
Quoting Per Jessen <per@computer.org>:
Rob, I'd be interested to know your reason for going for colo rather than just renting a dedicated server (or a few).
My employer is allowing me to mount a server in their rack; they do not have servers to rent. They have a big pipe, and the rack is in close vicinity to my desk.
I think it will be difficult giving you any sound advice without knowing the load on the box, the expected reliability, availability etc.
The load is light but will be increasing; of course I don't want it to ever go down ;). Realistically, though, brief periods of downtime will probably go unnoticed, and this will not be a unit I expect to be intentionally taking offline for frequent upgrades or anything. There will be paying customers, but nothing mission critical.
I'd like to hear what the consensus is on the various elements of server selection; importance of various components, brand name vs. generic, Intel vs. AMD, ATA vs. SCSI, value of extended warranties, etc.
My favourite setup is something along these lines:
Dual server in Linux-HA. Both with mirrored IDE disks, unless the server will have lots of concurrent traffic, in which case I use SCSI. CPU and RAM will depend on the purpose, but always single CPU (dual CPU boards are less cost-effective, unless you're short on space), and 512-1024MB RAM unless it's a database server, where I'd probably opt for as much RAM as possible. Two NICs per node, each connected to a separate switch/network. I use mirrored disks in each node as the drives tend to be the first to go these days.
I guess it depends on what you mean by "database server". Lots of MySQL sites, but nothing like Oracle. Thanks for your feedback. Rob
-- http://www.spamchek.com/freetrial - getting rid of spam and virus - once and for all!
Rob wrote regarding 'Re: [SLE] Recommended server hardware for a LAMP server' on Mon, Jan 24 at 10:05:
Quoting Per Jessen <per@computer.org>:
Rob, I'd be interested to know your reason for going for colo rather than just renting a dedicated server (or a few).
My employer is allowing me to mount a server in their rack; they do not have servers to rent. They have a big pipe, and the rack is in close vicinity to my desk.
That's a good reason. :)
I think it will be difficult giving you any sound advice without knowing the load on the box, the expected reliability, availability etc.
The load is light but will be increasing; of course I don't want it to ever go down ;). Realistically, though, brief periods of downtime will probably go unnoticed, and this will not be a unit I expect to be intentionally taking offline for frequent upgrades or anything. There will be paying customers, but nothing mission critical.
I'd like to hear what the consensus is on the various elements of server selection; importance of various components, brand name vs. generic, Intel vs. AMD, ATA vs. SCSI, value of extended warranties, etc.
It sounds like you'd be well-served with a commodity-type machine. If I were doing this, I'd probably go out and get a dual AMD machine - choose the chip based on your available funds. You mentioned MySQL, so it's fairly important that you get enough memory to hold your most commonly used tables in memory (if possible).

So, first, figure out a budget. Next, figure out how much space you'll need. If you can fit everything you need on a single ATA disk (about 200GB), run a RAID-1 with at least 2 disks. If you need more than one disk, figure on RAID-5. Then use a 3Ware controller (yeah, software RAID is good, but just spend the extra hundred bucks - for the ease of connectivity if for no other reason). The redundancy will eventually pay off when a drive fails, and you get a nice little speed boost the rest of the time.

So, now you have a 64-bit PCI card and some drives. Next, figure out which processor option will allow you to buy the most RAM. It'll be a balancing act, but I'd definitely go with more memory before I'd go with more processor (to a point). And get memory that supports ECC. It's slightly slower, but you won't notice, and it's nice to have that extra assurance against errors at high clock speeds, IMHO.

At that point, you should be able to fill in the blanks with a cheap AGP video card, a real name-brand NIC (Intel or 3Com are the only players, in my world - possibly two if you're feeling paranoid), and some hot-swap enclosures for those hard drives you picked out. The enclosures are cheap, and since a drive's the most likely thing to die on your box, it'll save you a pain later.

I stick with brand names for NICs, hard drives (Seagate), and motherboards. I get generic video cards and memory, but I generally run memtest86 on new memory for a few days. I also get Vantec fans, as a dead/noisy fan sucks.

Oh, and I'd recommend getting a good power supply separate from the enclosure. Sure, it's another $50 or more, but it'll help you sleep better.

--Danny, presuming "build it myself" is an option...
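On the "hold your most commonly used tables in memory" point, a rough way to see where you stand is to compare the on-disk size of your databases with what MySQL is actually allowed to cache. A hedged sketch (the data directory is the SUSE default; the 256M figure is just an example):

  # rough on-disk size of each database (default MyISAM data directory)
  du -sh /var/lib/mysql/*
  # current size of MySQL's index cache
  mysql -u root -p -e "SHOW VARIABLES LIKE 'key_buffer_size';"

If the databases dwarf the buffer, extra RAM (and a larger key_buffer, e.g. key_buffer = 256M under [mysqld] in /etc/my.cnf) is where the money goes first.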
Excellent info Danny, thanks.

I've built plenty of desktop and workgroup servers myself from scratch, so that doesn't scare me. But I don't have the sources. I typically buy components for workstations at those convention center shows that come through town, but that scares me for server components; I don't recall seeing 1U cases and server components at these anyway. Care to share some sources?

Rob

Quoting Danny Sauer <suse-linux-e.suselists@danny.teleologic.net>:
Rob wrote regarding 'Re: [SLE] Recommended server hardware for a LAMP server' on Mon, Jan 24 at 10:05:
Quoting Per Jessen <per@computer.org>:
Rob, I'd be interested to know your reason for going for colo rather than just renting a dedicated server (or a few).
My employer is allowing me to mount a server in their rack; they do not have servers to rent. They have a big pipe, and the rack is in close vicinity to my desk.
That's a good reason. :)
I think it will be difficult giving you any sound advice without knowing the load on the box, the expected reliability, availability etc.
The load is light but will be increasing; of course I don't want it to ever go down ;). Realistically, though, brief periods of downtime will probably go unnoticed, and this will not be a unit I expect to be intentionally taking offline for frequent upgrades or anything. There will be paying customers, but nothing mission critical.
I'd like to hear what the consensus is on the various elements of server selection; importance of various components, brand name vs. generic, Intel vs. AMD, ATA vs. SCSI, value of extended warranties, etc.
It sounds like you'd be well-served with a commodity-type machine. If I were doing this, I'd probably go out and get a dual AMD machine - choose the chip based on your available funds. You mentioned MySQL, so it's fairly important that you get enough memory to hold your most commonly used tables in memory (if possible).
So, first, figure out a budget. Next, figure out how much space you'll need. If you can fit everything you need on a single ATA disk (about 200GB), run a RAID-1 with at least 2 disks. If you need more than one disk, figure on RAID-5. Then use a 3Ware controller (yeah, software RAID is good, but just spend the extra hundred bucks - for the ease of connectivity if for no other reason). The redundancy will eventually pay off when a drive fails, and you get a nice little speed boost the rest of the time.
So, now you have a 64-bit PCI card and some drives. Next, figure out which processor option will allow you to buy the most RAM. It'll be a balancing act, but I'd definitely go with more memory before I'd go with more processor (to a point). And get memory that supports ECC. It's slightly slower, but you won't notice, and it's nice to have that extra assurance against errors at high clock speeds, IMHO.
At that point, you should be able to fill in the blanks with a cheap AGP video card, a real name-brand NIC (Intel or 3Com are the only players, in my world - possibly two if you're feeling paranoid), and some hot-swap enclosures for those hard drives you picked out. The enclosures are cheap, and since a drive's the most likely thing to die on your box, it'll save you a pain later.
I stick with brand names for NICs, hard drives (Seagate), and motherboards. I get generic video cards and memory, but I generally run memtest86 on new memory for a few days. I also get Vantec fans, as a dead/noisy fan sucks.
Oh, and I'd recommend getting a good power supply separate from the enclosure. Sure, it's another $50 or more, but it'll help you sleep better.
--Danny, presuming "build it myself" is an option...
On Mon, 2005-01-24 at 17:52, Rob Brandt wrote:
Excellent info Danny, thanks.
I've built plenty of desktop and workgroup servers myself from scratch, so that doesn't scare me. But I don't have the sources. I typically buy components for workstations at those convention center shows that come through town, but that scares me for server components; I don't recall seeing 1U cases and server components at these anyway. Care to share some sources?
Rob
Take a look at http://www.ironsystems.com - they sell rack units and will even pre-install Linux for you if you like.

--
Ken Schneider
UNIX since 1989, linux since 1994, SuSE since 1998
*Only reply to the list please*
"The day Microsoft makes something that doesn't suck is probably the day they start making vacuum cleaners." -Ernst Jan Plugge
www.eracks.com sells rackmount Linux systems and laptops. Just pick your distro.

CWSIV
Rob wrote regarding 'Re: [SLE] Recommended server hardware for a LAMP server' on Mon, Jan 24 at 16:52:
Excellent info Danny, thanks.
I've built plenty of desktop and workgroup servers myself from scratch, so that doesn't scare me. But I don't have the sources. I typically buy components for workstations at those convention center shows that come through town, but that scares me for server components; I don't recall seeing 1U cases and server components at these anyway. Care to share some sources?
I'm not sure about the rack-mount cases, but normally I'll either support the local computer retailer (who I know is good, and whose owner I have dealt with since he was operating out of his apartment), or semi-randomly look online. I get stuff from http://kspei.com/ frequently, and then look for other stuff at froogle.com. I'll sort by price, and pick the first one that has a name that I've heard of. Sites like resellerratings.com etc. generally have some info on online retailers, if you don't immediately trust everyone online (which should be the case). I tend to get my fans and similar stuff from http://www.xpcgear.com/

--Danny, considering ordering one of those dual-layer DVD burners (I didn't know they were down to $65 already!)
Danny Sauer wrote:
So, first, figure out a budget. Next, figure out how much space you'll need. If you can fit everything you need on a single ATA disk (about 200GB), run a RAID-1 with at least 2 disks. If you need more than one disk, figure on RAID-5. Then use a 3Ware controller (yeah, software RAID is good, but just spend the extra hundred bucks - for the ease of connectivity if for no other reason).
All you get in hardware RAID over software RAID is performance. If you need performance, definitely go for a hardware RAID controller.
And get memory that supports ECC. It's slightly slower, but you won't notice, and it's nice to have that extra assurance against errors at high clock speeds, IMHO.
Unless you're buying the same assurance for the rest of the system, it's not worth it. So unless you're also getting dual fans and dual power supplies, well, don't, IMHO. As for ECC guarding you "against errors at high clock speeds" - if your components aren't stable running at their respective clock speeds, ECC won't save you.
some hot-swap enclosures for those hard drives you picked out. The enclosures are cheap, and since a drive's the most likely thing to die on your box, it'll save you a pain later.
Note that Linux isn't very good with hot-swapping IDE drives. Also, if you're not overly worried about downtime, hot-swap is hardly your priority.

/Per Jessen, Zürich

--
http://www.spamchek.com/freetrial - sign up for your free 30-day trial now!
Per wrote regarding 'Re: [SLE] Recommended server hardware for a LAMP server' on Tue, Jan 25 at 06:41:
Danny Sauer wrote:
So, first, figure out a budget. Next, figure out how much space you'll need. If you can fit everything you need on a single ATA disk (about 200GB), run a RAID-1 with at least 2 disks. If you need more than one disk, figure on RAID-5. Then use a 3Ware controller (yeah, software RAID is good, but just spend the extra hundred bucks - for the ease of connectivity if for no other reason).
All you get in hardware RAID over software RAID is performance. If you need performance, definitely go for a hardware RAID controller.
You also get the ability to hook up more drives, and each drive will generally have its own connector. So, it's easier to hook up, performs better, and has a minimal cost difference. The only thing you get with software RAID is less money spent and more obscure admin tools. And loss of the ability to hot-swap drives, supposedly.
And get memory that supports ECC. It's slightly slower, but you won't notice, and it's nice to have that extra assurance against errors at high clock speeds, IMHO.
Unless you're buying the same assurance for the rest of the system, it's not worth it. So unless you're also getting dual fans and dual power-supplies, well, don't IMHO. As for ECC guarding you "against errors at high clock speeds" - if your components aren't stable to run at their respective clockspeeds, ECC won't save you.
The performance difference is minimal, the price difference is minimal, and it's more reliable. This is not an exercise in building the cheapest machine possible, it's a plan for building a low-end server. Some people think it's fine to carry a high deductible on their insurance, and often that pays off, as it's never needed. When the price difference is minimal, why not get the extra assurance? Errors from electrical interference are unlikely, but over time, the likelihood increases. Whatever, though. It's not my gamble.

As far as the fans and power supplies go, well, a properly designed case *will* have redundant cooling. Redundant power supplies aren't in the same class as RAID drives, though. Granted, both can fail, but drives fail more often. On top of that, for the >99% of the time when all of the drives *are* working, there's a performance boost. There's no boost from power supplies. It's just money spent on redundancy.
some hot-swap enclosures for those hard drives you picked out. The enclosures are cheap, and since a drive's the most likely thing to die on your box, it'll save you a pain later.
Note that Linux isn't very good with hot-swapping IDE-drives. Also, if you're not overly worried about downtime, hotswap is hardly your priority.
I'm sitting next to 2 machines with hotswap IDE drives, both running Linux. One's running SuSE, the other's running Gentoo on a PPC. One has the drives hooked to a 3Ware RAID card, the other has the drives in a firewire enclosure. The firewire machine is running software RAID over 8 drives, with LVM on top of the RAID. I've hot-swapped drives in both systems, multiple times, while the system is under load, and had 0 problems. The machines did not need to be rebooted. It cost a whopping $10/drive to mount them in those trays, and the trays promote better airflow over the drives to boot. Money well spent, IMHO.

Linux has no problem with the drives. Your motherboard adaptor might have problems with hotswapping, but then, that's one more reason to run a real RAID card.

--Danny
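For anyone curious, the "LVM on top of the RAID" layering Danny describes is roughly the following. This is only a sketch, and the md device, volume group name and sizes are made-up examples:

  # use the existing software-RAID device as an LVM physical volume
  pvcreate /dev/md0
  vgcreate vg_data /dev/md0
  # carve out a logical volume and put a filesystem on it
  lvcreate -L 200G -n archive vg_data
  mkfs.ext3 /dev/vg_data/archive

The advantage is that logical volumes can later be grown or added without touching the underlying RAID layout.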
Danny Sauer wrote:
The only thing you get with software RAID is less money spent and more obscure admin tools.
Hardware controllers all come with pretty obscure sets of admin tools, too. I've used the occasional Mylex RAID controller in the past and they certainly aren't very standard.
And loss of the ability to hot-swap drives, supposedly.
If your general hot-swap setup works, it'll work with software RAID too. raidhotadd and raidhotremove work just fine.
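In practice, swapping a dying disk out of a software mirror looks something like this (a sketch only; device names are examples, and the raidtools commands Per mentions have mdadm equivalents):

  # mark the disk failed if the kernel hasn't already, then drop it from the array
  raidsetfaulty /dev/md0 /dev/hdc1
  raidhotremove /dev/md0 /dev/hdc1
  # after physically replacing the drive, add it back and let it resync
  raidhotadd /dev/md0 /dev/hdc1
  # the same with mdadm:
  mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1
  mdadm /dev/md0 --add /dev/hdc1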
As far as the fans and power supplies go, well, a properly designed case *will* have redundant cooling.
The case perhaps yes, but will the power-supply and the CPU? I have an older Compaq Proliant that has dual power-supply, dual (well, quad) case/CPU fans, but doesn't automatically ship with ECC memory. I think that says something about the importance of power-supplies & fans vs. ECC memory. (or at least what Compaq thought of it :-).
I'm sitting next to 2 machines with hotswap IDE drives, both are running Linux. One's running SuSE, the other's running Gentoo on a PPC. One has the drives hooked to a 3Ware RAID card, the other has the drives in a firewire enclosure. The firewire machine is running software RAID over 8 drives, with LVM on top of the RAID. I've hot-swapped drives in both systems, multiple times, while the system is under load, and had 0 problems.
Firewire is not IDE - obviously your Firewire box understands hotswapping, but I'm impressed that the hot-swap worked for the drives connected directly over IDE. I really did not think Linux was up to it.

/Per Jessen, Zürich

--
http://www.spamchek.com/freetrial - sign up for your free 30-day trial now!
At 04:24 AM 27/01/2005, Per Jessen wrote:
Danny Sauer wrote:
The only thing you get with software RAID is less money spent and more obscure admin tools.
cut
As far as the fans and power supplies go, well, a properly designed case *will* have redundant cooling.
The case perhaps yes, but will the power-supply and the CPU? I have an older Compaq Proliant that has dual power-supply, dual (well, quad) case/CPU fans, but doesn't automatically ship with ECC memory. I think that says something about the importance of power-supplies & fans vs. ECC memory. (or at least what Compaq thought of it :-).
From my memory of times past, I don't remember Compaq ever requiring ECC memory to start with on their early machines, and they always seemed to have more fans than normal (blowing through, too, not just pulling air out) and an upgraded power supply (even pre-switchmode). I was told back then that this was so they could meet the military specs of the Pentagon.

scsijon
Many thanks to those who responded to my query regarding server hardware to run SUSE 9.2 on. I am strongly leaning towards an HP dual Opteron model (DL145) because of expansion possibilities and my space limitation. And cost, of course.

One issue concerns me though. Since SUSE amd64 is a different version than i386, what issues am I going to have with commercial packages since I can't just recompile for amd64? Is this a genuine compatibility issue, or is it just an optimization issue? Some of the commercial packages I might want to install don't have amd64 versions.

Thanks

Rob

Quoting Rob Brandt <bronto@csd-bes.net>:
I'm about to install my own colo server in a data center. I need to supply the hardware myself, and am free to pick any components that suit me as long as it fits in a rack (I assume a 1U form factor). Up until this time I have been running a very low-end tower server for use as a development server with only a few public sites. This will be for use mostly as a LAMP web server running content management systems; most sites will have low volume although a few are fairly high. Other than the LAMP sites, I do run a few moderate volume mailman mailing lists. I have a low budget and want to get the most bang for my buck.
The network engineering folks I work with recommend HP DL320's; they are an HP reseller and are "Windows People". They frequently buy off of the HP refurb list. The DL320 is a single-processor 3GHz P4, expandable to 2GB of RAM. My own inclination is to believe that memory is of supreme importance, and that AMD Opterons may be a better value. HP has an Opteron package (DL145) that's inexpensive with base RAM, but another gig of RAM is $1K. I've seen better deals, but the service and support HP offers is attractive. FWIW, I will have easy physical access to the server in the data center.
Ideally (in my naive mind), I think I want a single Opteron that's dual-capable and expandable to many gigs of RAM. That way, I can add RAM/processors as needs grow and prices fall.
I'd like to hear what the consensus is on the various elements of server selection; importance of various components, brand name vs. generic, Intel vs. AMD, ATA vs. SCSI, value of extended warranties, etc.
TIA
Rob
Rob Brandt wrote:
One issue concerns me though. Since SUSE amd64 is a different version than i386,

arch, not version

what issues am I going to have with commercial packages since I can't just recompile for amd64?

Why not?

Is this a genuine compatibility issue, or is it just an optimization issue? Some of the commercial packages I might want to install don't have amd64 versions.
You can still run i386 software in amd64. If you can rebuild the source, you could rebuild on x86_64 as well. -- Joe Morris New Tribes Mission Email Address: Joe_Morris@ntm.org Registered Linux user 231871
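A quick way to check what a binary-only package actually needs (the path below is a made-up example):

  # see which architecture the binary was built for
  file /opt/someapp/bin/someapp
  #   "ELF 32-bit LSB executable, Intel 80386 ..." means it's an i386 binary
  # list the shared libraries it wants; "not found" entries point at missing 32-bit libs
  ldd /opt/someapp/bin/someapp

On SUSE's x86_64 distributions the 32-bit runtime libraries are installed alongside the 64-bit ones (the *-32bit packages), so most i386 binaries run as-is.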
Quoting "Joe Morris (NTM)" <Joe_Morris@ntm.org>:
Rob Brandt wrote:
One issue concerns me though. Since SUSE amd64 is a different version than i386,

arch, not version

what issues am I going to have with commercial packages since I can't just recompile for amd64?

Why not?
They are commercial. I won't have the source.
Is this a genuine compatibility issue, or is it just an optimization issue? Some of the commercial packages I might want to install don't have amd64 versions.
You can still run i386 software in amd64. If you can rebuild the source, you could rebuild on x86_64 as well.
They are commercial. I won't have the source. I'm unclear if I can run i386 binaries in amd64. I can't rebuild the source.

Thanks

Rob
-- Joe Morris New Tribes Mission Email Address: Joe_Morris@ntm.org Registered Linux user 231871
Rob Brandt wrote:
You can still run i386 software in amd64. If you can rebuild the source, you could rebuild on x86_64 as well.
They are commercial. I won't have the source.
I'm unclear if I can run i386 binaries in amd64. I can't rebuild the source.
Yes. For example, I have an amd64 (x86_64) computer, and run the 'binary only' skype with no problem. -- Joe Morris New Tribes Mission Email Address: Joe_Morris@ntm.org Registered Linux user 231871
Thanks very much Joe, puts my mind at ease. BTW, I know Merril and Teresa; they've been in Santa Barbara a couple of times over the last few months:

http://www.ntm.org/venezuela/give_missionary_details.php?missionary_id=1472&page=missionary%20details

Rob

Quoting "Joe Morris (NTM)" <Joe_Morris@ntm.org>:
You can still run i386 software in amd64. If you can rebuild the source, you could rebuild on x86_64 as well.
They are commercial. I won't have the source.
I'm unclear if I can run i386 binaries in amd64. I can't rebuild the source.
Yes. For example, I have an amd64 (x86_64) computer, and run the 'binary only' skype with no problem. -- Joe Morris New Tribes Mission Email Address: Joe_Morris@ntm.org Registered Linux user 231871
There is also a list JUST for 64-bit SUSE: suse-amd64@suse.com

I have an amd64 machine and there ARE compatibility issues with compiling source meant for 32-bit. I have been able to get around them mostly. SuSE provides some nice scripts to set up the build environment for either one. With binaries, though, you shouldn't have any trouble. The system is set up so that if something doesn't know about 64-bit, it will only see 32-bit (which can also cause some trouble with installations after compiling something for 64-bit).

B-)

On Saturday 29 January 2005 10:00 am, Joe Morris (NTM) wrote:
Rob Brandt wrote:
You can still run i386 software in amd64. If you can rebuild the source, you could rebuild on x86_64 as well.
They are commercial. I won't have the source.
I'm unclear if I can run i386 binaries in amd64. I can't rebuild the source.
Yes. For example, I have an amd64 (x86_64) computer, and run the 'binary only' skype with no problem. -- Joe Morris New Tribes Mission Email Address: Joe_Morris@ntm.org Registered Linux user 231871
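On Brad's point about building 32-bit software on an x86_64 install: he refers to SUSE's own helper scripts, which aren't named here; a generic, hedged sketch of the same idea is:

  # make configure scripts see an i586/i686 system instead of x86_64
  linux32 ./configure
  # force 32-bit code generation (requires the 32-bit glibc/devel packages)
  make CFLAGS="-m32" LDFLAGS="-m32"

How well this works depends on whether the package's build system honours those flags.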
participants (8)

- Brad Bourn
- Carl William Spitzer IV
- Danny Sauer
- Joe Morris (NTM)
- Ken Schneider
- Per Jessen
- Rob Brandt
- scsijon