[opensuse] Software IDE RAID versus hardware IDE RAID
Hello everyone. I have an old server with an IDE RAID card in it on which I'd like to install openSUSE 10.3. The problem is, support for said RAID card (a Dell CERC ATA/100) has been discontinued for some time now. With that I ask, how feasible and/or advisable would it be to attempt the following?

1) Install openSUSE and configure it to use software RAID (two mirrored IDE drives (master/slave) for system, two striped IDE drives (master/slave) for data)
2) Enable Encrypted File System (EFS)

Basically, I'm worried that an IDE based machine will be painfully slow due to the high disk activity. Ideally I would use the machine's existing SCSI U320 interface, but the required hard drives are just too expensive.

Since the machine has two 2.4GHz Xeon processors in it, couldn't I designate one to do nothing but RAID and encryption, thereby leaving the other processor free to do everything else, such as running virtual machines? If so, can anyone point me in the direction of a good online how-to?

Thank you for your collective time.

John
On Tue, Apr 1, 2008 at 10:40 AM, John Bown wrote:
Hello everyone. I have an old server with an IDE RAID card in it on which I'd like to install openSUSE 10.3. The problem is, support for said RAID card (a Dell CERC ATA/100) has been discontinued for some time now. With that I ask, how feasible and/or advisable would it be to attempt the following?
Linux typically does not drop support for a very long time. Effectively, if there are even just a couple of users, they tend to keep the source in the kernel. I would not be one bit surprised to find there is a module that supports your hardware raid card. But at least on new hardware, Linux raid is very efficient, and you are just doing raid 1, which takes very few resources anyway.
1) Install openSUSE and configure it to use software RAID (two mirrored IDE drives (master/slave) for system, two striped IDE drives (master/slave) for data)
I guess you just have 2 IDE channels? If so, I suggest you keep your pairs together a little differently: I would go with system (master / master), data (slave / slave). That way, when you are writing only to the data drives, you get to use both channels simultaneously. If you have both on the same channel, you have to perform the writes sequentially (i.e. half the speed). A minimal mdadm sketch follows.
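As a minimal mdadm sketch of that layout (assuming the usual IDE device names -- hda/hdc are the two masters, hdb/hdd the two slaves -- and that the partitions already exist):

  # mirror the two masters for the system (RAID 1)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

  # stripe the two slaves for data (RAID 0)
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/hdb1 /dev/hdd1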
2) Enable Encrypted File System (EFS)
That is your performance killer, I suspect.
Basically, I'm worried that an IDE based machine will be painfully slow due to the high disk activity. Ideally I would use the machine's existing SCSI U320 interface, but the required hard drives are just too expensive.
If you stick to one drive per channel, IDE is not so horrible. The next step up is SATA. If you stick to software raid, you should be able to get a PCI SATA controller at a reasonable cost.
Since the machine has two 2.4GHz Xeon processors in it, couldn't I designate one to do nothing but RAID and encryption, thereby leaving the other processor free to do everything else, such as running virtual machines? If so, can anyone point me in the direction of a good online how-to?
I suspect that will just happen. If the encryption you are using is kernel based, I'm afraid I don't know how to pin it to one CPU only. If you're using userspace encryption (like the FUSE-based encfs), then you should have no problem pinning it to one CPU (see the taskset sketch below).

Greg

--
Greg Freemyer Litigation Triage Solutions Specialist http://www.linkedin.com/in/gregfreemyer First 99 Days Litigation White Paper - http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf The Norcross Group The Intersection of Evidence & Technology http://www.norcrossgroup.com
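A minimal sketch of the pinning Greg describes, using taskset from util-linux (the encfs paths here are hypothetical):

  # start the encfs daemon restricted to CPU 1, leaving CPU 0 for everything else
  taskset -c 1 encfs /home/john/.encrypted /home/john/clear

  # or restrict an already-running process by PID
  taskset -cp 1 <pid>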
On Tue, Apr 1, 2008 at 7:40 AM, John Bown wrote:
Hello everyone. I have an old server with an IDE RAID card in it on
Basically, I'm worried that an IDE based machine will be painfully slow
I doubt this... With RAID 1, writes are done in parallel, but reads go to whichever drive is free, so it ends up being a tad faster in most cases.
Since the machine has two 2.4GHz Xeon processors in it, couldn't I designate one to do nothing but RAID and encryption
No you can't. (Well, technically you probably could set affinity on some processes, as long as they did not run in kernel space.) This is a fool's errand. Install the default kernel (which is also the SMP kernel specific to your processors) and step away from the console. Linux will handle multiple processors way better than you could. Software raid is so resource UN-intensive that you will never even notice it running.

Now, about that raid card.... Often (without me doing any research on that specific card) you will find a way to disable the on-board raid controller (a jumper), leaving you with just a multi-channel IDE card, which is perfect for building software raid. Mirrored software raid will probably outperform the card's (fairly wimpy) processor anyway, and it has the advantage of dual processors as opposed to the single processor on the raid card.

I've used a lot of IDE raid cards over the years, and I used all of them this way (disable the on-board raid software). The performance is more than acceptable.

--
----------JSA---------
John Bown wrote:
Hello everyone. I have an old server with an IDE RAID card in it on which I'd like to install openSUSE 10.3. The problem is, support for said RAID card (a Dell CERC ATA/100) has been discontinued for some time now. With that I ask, how feasible and/or advisable would it be to attempt the following?
1) Install openSUSE and configure it to use software RAID (two mirrored IDE drives (master/slave) for system, two striped IDE drives (master/slave) for data) 2) Enable Encrypted File System (EFS)
Basically, I'm worried that an IDE based machine will be painfully slow due to the high disk activity. Ideally I would use the machine's existing SCSI U320 interface, but the required hard drives are just too expensive.
Since the machine has two 2.4GHz Xeon processors in it, couldn't I designate one to do nothing but RAID and encryption, thereby leaving the other processor free to do everything else, such as running virtual machines? If so, can anyone point me in the direction of a good online how-to?
Thank you for your collective time.
John
Just use software raid; it works great. I have 3 servers using it presently without any complaints. You won't notice any overhead on any modern processor with minimal ram. A P-III 800 is fine. Advantages:

The partitions and format are recognizable under any Linux system. Therefore, if you have a controller or other hardware problem, you are not stuck with a disk in a proprietary format. Simply stick the disk in another box with another disk and rebuild the raid array regardless of hardware.

The 10.3 YaST partitioner makes it simple to set up. Just partition each disk as you like (/, /boot, /home are the default) and select dm-raid as the filesystem type (instead of ext3, etc.) for each partition and you're done. (A quick way to check the result is shown below.)

--
David C. Rankin, J.D., P.E. Rankin Law Firm, PLLC 510 Ochiltree Street Nacogdoches, Texas 75961 Telephone: (936) 715-9333 Facsimile: (936) 715-9339 www.rankinlawfirm.com
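A quick sanity check after YaST finishes, assuming the arrays came up as md devices:

  # overall view: arrays, members, and [UU] vs [U_] sync status
  cat /proc/mdstat

  # per-array detail: level, state, and member disks
  mdadm --detail /dev/md0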
John Bown wrote:
Hello everyone. I have an old server with an IDE RAID card in it on which I'd like to install openSUSE 10.3. The problem is, support for said RAID card (a Dell CERC ATA/100) has been discontinued for some time now. With that I ask, how feasible and/or advisable would it be to attempt the following?
1) Install openSUSE and configure it to use software RAID (two mirrored IDE drives (master/slave) for system, two striped IDE drives (master/slave) for data) 2) Enable Encrypted File System (EFS)
Basically, I'm worried that an IDE based machine will be painfully slow due to the high disk activity. Ideally I would use the machine's existing SCSI U320 interface, but the required hard drives are just too expensive.
Since the machine has two 2.4GHz Xeon processors in it, couldn't I designate one to do nothing but RAID and encryption, thereby leaving the other processor free to do everything else, such as running virtual machines? If so, can anyone point me in the direction of a good online how-to?
Thank you for your collective time.
John
Just use software raid; it works great. I have 3 servers using it presently without any complaints. You won't notice any overhead on any modern processor with minimal ram. A P-III 800 is fine. Advantages:
The partitions and format are recognizable under any Linux system. Therefore, if you have a controller or other hardware problem, you are not stuck with a disk in a proprietary format. Simply stick the disk in another box with another disk and rebuild the raid array regardless of hardware.
The 10.3 YaST partitioner makes it simple to set up. Just partition each disk as you like (/, /boot, /home are the default) and select dm-raid as the filesystem type (instead of ext3, etc.) for each partition and you're done.
I've always felt software raid was too much of a tax on the system, not to mention less reliable (regardless of OS). I've always preferred hardware raid. We use Escalade SATA RAID cards (RAID level 1) in our SUSE servers because the RAID cards have onboard processors, so the OS doesn't need to know the RAID is even there (no need for RAID drivers). They work really, really well... and that saves on CPU cycles. I'm pretty sure there are EIDE Escalade cards, but you'd have to look. I think AMCC bought out 3Ware (the manufacturer), if I recall correctly. But your budget may not allow for such purchases.

If you opt to do the software raid, the YaST partitioner should keep things fairly straightforward. I've never done it, so I can't tell you how it works.

Jason
On Tue, Apr 1, 2008 at 6:48 PM, Jason Bailey, Sun Advocate Webmaster wrote:
I've always felt software raid was too much of a tax on the system, not to mention less reliable (regardless of OS).
Wrong on both counts. So wrong that merely stating that indicates you have never even used software raid. You just went out and spent the money, and never looked back.

I manage several servers, all of them loafing along at under 2% utilization under heavy data access traffic from a multitude of workstations. That's 2% total system utilization for both Samba and RAID combined. Software raid is fast and lightweight. With a raid card your system never gets any faster than that card. Beef up the server, and it's still as slow as the card. Software raid improves with each server upgrade.

And if you ever have one disk crash or a controller fail with hardware raid, you have to go find another card to match, because the remaining raid drives are recorded using a proprietary scheme that is not transportable. With software raid you can mix SCSI, IDE, and SATA drives, and actually gain performance by doing so. With hardware raid you are locked into a specific type of disk, and usually a specific SIZE of disk.

As for less reliable: I've had disks fail over 8 years of using software raid, but regardless of raid type (1 or 5) I've never lost any data. And when drives did fail, recovery of the array was as simple as installing the replacement disk (or inserting a new hot-spare); the mdadm steps are sketched below. The raid rebuild happened totally in the background without any downtime beyond what was necessary for the actual disk swap. In one server, with hot-removable drives, a drive swap on the fly was a total non-event. The users never knew it happened.

--
----------JSA---------
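The disk swap described above is only a few commands with mdadm (device names are examples; here a failed member of /dev/md0 on /dev/sdb1):

  mdadm /dev/md0 --fail /dev/sdb1     # mark the member failed, if the kernel hasn't already
  mdadm /dev/md0 --remove /dev/sdb1   # drop it from the array
  # physically swap the drive, then partition it to match its mirror, e.g.:
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  mdadm /dev/md0 --add /dev/sdb1      # the rebuild starts in the background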
On Tue, Apr 1, 2008 at 6:48 PM, Jason Bailey, Sun Advocate Webmaster
wrote: I've always felt software raid was too much of a tax on the system, not to mention less reliable (regardless of OS).
Wrong on both counts.
Not so fast... just hold on there.
So wrong that merely stating that indicates you have never even used software raid.
Not so. I have used software RAID on many systems, just not on a SUSE Linux machine. I've simply never used YaST to set it up. I have mixed feelings about software raid, regardless of platform.
You just went out and spent the money, and never looked back.
Wrong again. Boy, you're making some big conclusions here. My tech partner and I actually did quite a bit of research before coming to the same conclusion. That aside, past experiences speak volumes. You stick with what you know works - what you have had good experience with.
I manage several servers, all of them loafing along at under 2% utilization under heavy data access traffic from a multitude of workstations. That's 2% total system utilization for both Samba and RAID combined. Software raid is fast and lightweight.
Linux raid is faster and more lightweight than that of Windows, at least in my experience. But it's still CPU cycles that could instead be handled by a RAID card with onboard processing. And 2% is much lower than what I have experienced. If you're going to make that argument, then I guess hardware-accelerated video cards are worthless too... let's just stick with software-rendered OpenGL from now on....
With a raid card your system never gets any faster than that card. Beef up the server, and it's still as slow as the card. Software raid improves with each server upgrade.
That's a stretch if I've ever seen one. Whether you're talking SATA or EIDE, you have the physical speed of the channel. A SATA1 RAID card runs at SATA1 speeds, along with the connected SATA1 channel. Same goes for SATA2 or EIDE. Upgrading operating systems won't change that. You've gotta change hardware to make that happen.
And if you ever have one disk crash or a controller fail with hardware raid, you have to go find another card to match, because the remaining raid drives are recorded using a proprietary scheme that is not transportable.
Could the card fail? Yeah. But an individual drive can still be mounted as an individual drive and the data can be salvaged, if needed. Been there, done that. Proprietary schemes? My card saves the array data in the firmware of the card, not on the hard drives in the RAID array. So I don't know what RAID cards you've been using. In any event, the raids I use aren't dependent upon kernel modules and drivers that can go awry because of the onboard processing. A physical card is less prone to software-related issues that can creep up.
With software raid you can mix SCSI, IDE, and SATA drives, and actually gain performance by doing so. With hardware raid you are locked into a specific type of disk, and usually a specific SIZE of disk.
Performance gains? That's a real stretch. When the card can take processing cycles away from your processor, the hardware raid will speed things up, not the other way around. Now, remember... not all hardware raid cards are created equal. Some cards don't have onboard processors, which means they have to have drivers (i.e. kernel modules) to power them (the processing is done by the PC's CPU). In those cases, some of your arguments are more arguable. But I'm talking about RAID cards that have onboard processors. These are not the same beasts.
As for less reliable: I've had disks fail over 8 years of using software raid, but regardless of raid type (1 or 5) I've never lost any data.
Linux software raid, in my opinion, is more reliable than that of Windows (yeah, a real shocker). But I still don't think software raid can outperform or outlast hardware raid with onboard processing. I think a majority of system admins out there would agree.
And when drives did fail, recovery of the array was as simple as installing the replacement disk (or inserting a new hot-spare). The raid rebuild happened totally in the background without any downtime beyond what was necessary for the actual disk swap. In one server, with hot-removable drives, a drive swap on the fly was a total non-event. The users never knew it happened.
I have had my RAID break once. I entered the RAID card's BIOS and it was a trivial fix. I plugged the new drive in and told the BIOS to replace the bad drive in the array and voilà... it was done. RAID cards don't necessarily make additions or repairs to your array more complicated. In fact, one could argue that with some raid cards, it's EASIER to fix or set up than with software raid.
Whether to use hardware or software raid is a personal decision, and there are pros and cons to each. The biggest benefit of software raid, in my opinion, is the cost (no extra hardware to buy). But I think if reliability and performance are your priorities, hardware raid is the way to go. But then again, obviously not everyone sees things the same way...
Jason Bailey, Sun Advocate Webmaster wrote:
On Tue, Apr 1, 2008 at 6:48 PM, Jason Bailey, Sun Advocate Webmaster
wrote: I've always felt software raid was too much of a tax on the system, not to mention less reliable (regardless of OS).
Wrong on both counts.
Not so fast... just hold on there.
So wrong that merely stating that indicates you have never even used software raid.
Not so. I have used software RAID on many systems, just not on a SUSE Linux machine. I've simply never used YaST to set it up. I have mixed feelings about software raid, regardless of platform.
You just went out and spent the money, and never looked back.
Wrong again. Boy, you're making some big conclusions here. My tech partner and I actually did quite a bit of research before coming to the same conclusion. That aside, past experiences speak volumes. You stick with what you know works - what you have had good experience with.
I manage several servers, all of them loafing along at under 2% utilization under heavy data access traffic from a multitude of workstations. That's 2% total system utilization for both Samba and RAID combined. Software raid is fast and lightweight.
Linux raid is faster and more lightweight than that of Windows, at least in my experience. But it's still CPU cycles that could instead be handled by a RAID card with onboard processing.
Minimal. *ALL* disk controller cards these days are bus-master cards. The O/S gives the controller card 4 pieces of information:

1. The physical memory address of the I/O buffer
2. The size of the I/O operation to perform
3. The operation type (READ/WRITE) -- in well-designed disk controllers, this is the high bit of the same data word as the size data, like so:

       bit 31    bits 30..0
      +------+---------------------------+
      | R/W  |        I/O size           |
      +------+---------------------------+

4. The logical location of the blocks to read or write.

Once the controller card has that information, it performs the rest of the I/O operation on its own, and signals the CPU when the operation is finished. For software RAID, in the event of a write, the CPU sends the 4 pieces of control information (in 3 writes) for each mirroring. The only time software RAID is really a load on the CPU is in the event of write operations in RAID 5 or RAID 6, due to having to XOR all of the parallel blocks... and in this day and age, I can't see even that being a significant load compared to either a database engine or any non-trivial application + filesystem overhead.
And 2% is much lower than what I have experienced. If you're going to make that argument, then I guess hardware-accelerated video cards are worthless too... let's just stick with software-rendered OpenGL from now on....
That's a completely inapt comparison.
With a raid card your system never gets any faster than that card. Beef up the server, and it's still as slow as the card. Software raid improves with each server upgrade.
That's a stretch if I've ever seen one. Whether you're talking SATA or EIDE, you have the physical speed of the channel. A SATA1 RAID card runs at SATA1 speeds, along with the connected SATA1 channel. Same goes for SATA2 or EIDE. Upgrading operating systems won't change that. You've gotta change hardware to make that happen.
I think he was talking about the motherboard. With hardware RAID, you have to upgrade your RAID card, too.
And if you ever have one disk crash or a controller fail with hardware raid, you have to go find another card to match, because the remaining raid drives are recorded using a proprietary scheme that is not transportable.
Could the card fail? Yeah. But an individual drive can still be mounted as an individual drive and the data can be salvaged, if needed. Been there, done that.
Proprietary schemes? My card saves the array data in the firmware of the card, not on the hard drives in the RAID array. So I don't know what RAID cards you've been using. In any event, the raids I use aren't dependent upon kernel modules and drivers that can go awry because of the onboard processing. A physical card is less prone to software-related issues that can creep up.
That much is true. Of course, if the RAID card fails...
With software raid you can mix SCSI, IDE, and SATA drives, and actually gain performance by doing so. With hardware raid you are locked into a specific type of disk, and usually a specific SIZE of disk.
Performance gains? That's a real stretch. When the card can take processing cycles away from your processor, the hardware raid will speed things up, not the other way around.
Other than RAID 5 and RAID 6, the difference is trivial.
Now, remember... not all hardware raid cards are created equal. Some cards don't have onboard processors, which means they have to have drivers (i.e. kernel modules) to power them (the processing is done by the PC's CPU). In those cases, some of your arguments are more arguable. But I'm talking about RAID cards that have onboard processors. These are not the same beasts.
But other than XOR operations for RAID 5/6, the amount of processing that these RAID cards do saves the main CPU a few hundred CPU cycles per additional disk to write, which isn't even one microsecond of difference. Considering that disk head seek times are still on the order of milliseconds, it's not something which would keep me awake at night if I was just doing some combination of mirroring and/or striping.
As for less reliable: I've had disks fail over 8 years of using software raid, but regardless of raid type (1 or 5) I've never lost any data.
Linux software raid, in my opinion, is more reliable than that of Windows (yeah, a real shocker). But I still don't think software raid can outperform or outlast hardware raid with onboard processing. I think a majority of system admins out there would agree.
But when it does fail.....hardware RAID can fail in a much more spectacular manner. Personally, I'll put up with some minor annoyances if it avoids spectacular failures. The best system admins are the ones who nobody knows, because users never even become aware of problems.
And when drives did fail, recovery of the array was as simple as installing the replacement disk (or inserting a new hot-spare). The raid rebuild happened totally in the background without any downtime beyond what was necessary for the actual disk swap. In one server, with hot-removable drives, a drive swap on the fly was a total non-event. The users never knew it happened.
I have had my RAID break once. I entered the RAID card's BIOS and it was a trivial fix. I plugged the new drive in and told the BIOS to replace the bad drive in the array and voilà... it was done. RAID cards don't necessarily make additions or repairs to your array more complicated. In fact, one could argue that with some raid cards, it's EASIER to fix or set up than with software raid.
Provided the RAID card doesn't fail AND you have a spare on site, AND you have all the partitioning data stored someplace else. There's a reason that HP and Sun use software RAID even in their high-end machines. IBM seems to prefer hardware RAID, but then again, when it comes to HARDWARE, IBM is the industry leader (and in my 27 years of experience, their software tends to suck), and the hardware was originally developed for their mainframe lines... some of which don't even have an on/off switch.
Whether or not to use hardware or software raid is a personal decision, and there are pros and cons to each. The biggest benefit of software raid, in my opinion, is the cost (no extra hardware to buy). But I think if reliability and performance are your priorities, hardware raid is the way to go.
If your RAID cards are of IBM quality, yes. All others... I regard with suspicion.
But then again, obviously not everyone sees things the same way...
Jason Bailey, Sun Advocate Webmaster wrote:
I've always felt software raid was too much of a tax on the system, not to mention less reliable (regardless of OS). I've always preferred hardware raid. We use Escalade SATA RAID cards (RAID level 1) in our SUSE servers because the RAID cards have onboard processors, so the OS doesn't need to know the RAID is even there (no need for RAID drivers). They work really, really well... and that saves on CPU cycles.
On modern systems the cpu load is no longer an argument, even if you were to use software RAID5. The real argument is ease of use: software raid cannot include /boot, so you have to provide redundancy for /boot in some other way. I also insist on hotplug drives, so I can change a broken hdd without taking the system down. Even more important, I don't have to think about the necessary steps or commands, just rip out the broken drive and plug in the new one, the controller takes care of the rest. That is something I can do even if I suffer from sleep deprivation. (^-^)

--
Sandy List replies only please! Please address PMs to: news-reply2 (@) japantest (.) homelinux (.) com
Sandy Drobic wrote:
The real argument is ease of use: software raid cannot include /boot, so you have to provide redundancy for /boot in some other way.
Well, as long as /boot is ext3 and one installs grub on both disks, it works. (I tested it by ripping out disks; a grub sketch follows below.) That said, your statement
I also insist on hotplug drives, so I can change a broken hdd without taking the system down. Even more important, I don't have to think about the necessary steps or commands, just rip out the broken drive and plug in the new one, the controller takes care of the rest.
is of course the real argument for hardware raid.
That is something I can do even if I suffer from sleep deprivation. (^-^)
Just make sure that you really rip out the broken disk. Just last month, I was called to a customer's site because some sleep-deprived sysadmin ripped out the working disk, and not the broken disk. Usually, I recommend that my technicians just let the RAID degrade and repair it calmly, when they are well rested -- after all, they just lost the failover possibility... ;-)

Joachim

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Joachim Schrod Email: jschrod@acm.org Roedermark, Germany
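A sketch of the grub-on-both-disks trick mentioned above, from the grub (legacy) shell; disk and partition names are assumptions:

  grub> device (hd0) /dev/sda
  grub> root (hd0,0)              # the partition holding /boot
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb     # temporarily call the second disk hd0 so it can boot on its own
  grub> root (hd0,0)
  grub> setup (hd0)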
The Thursday 2008-04-03 at 03:10 +0200, Joachim Schrod wrote:
Just make sure that you really rip out the broken disk. Just last month, I was called to a customer's site because some sleep-deprived sysadmin ripped out the working disk, and not the broken disk. Usually, I recommend that my technicians just let the RAID degrade and repair it calmly, when they are well rested -- after all, they just lost the failover possibility... ;-)
I wonder why they don't put an LED on them, so that you can light it up from some software and see "that one".

--
Cheers, Carlos E. R.
Carlos E. R. pecked at the keyboard and wrote:
The Thursday 2008-04-03 at 03:10 +0200, Joachim Schrod wrote:
Just make sure that you really rip out the broken disk. Just last month, I was called to a customer's site because some sleep-deprived sysadmin ripped out the working disk, and not the broken disk. Usually, I recommend that my technicians just let the RAID degrade and repair it calmly, when they are well rested -- after all, they just lost the failover possibility... ;-)
I wonder why they don't put an LED on them, so that you can light it up from some software and see "that one".
-- Cheers, Carlos E. R.
The Compaq servers I worked on did just that. The tray that the hot-swap drive was in had status LEDs and you could easily tell when a drive failed.

--
Ken Schneider SuSe since Version 5.2, June 1998
Carlos E. R. wrote:
The Thursday 2008-04-03 at 03:10 +0200, Joachim Schrod wrote:
Just make sure that you really rip out the broken disk. Just last month, I was called to a customer's site because some sleep-deprived sysadmin ripped out the working disk, and not the broken disk. Usually, I recommend that my technicians just let the RAID degrade and repair it calmly, when they are well rested -- after all, they just lost the failover possibility... ;-)
I wonder why they don't put an LED on them, so that you can light it up from some software and see "that one".
Well, if the hdd is broken, the LED probably won't get the signal anyway. (^-^)

Normally, a server with a hardware RAID controller and hotplug enclosure is SAF-TE aware: each hdd slot has an activity and a failure LED, so the RAID controller that monitors the disks will activate the failure LED if the drive does not work correctly. This presumes that the drive enclosure is SAF-TE compliant and the RAID controller is connected to the enclosure with the necessary signal cable.

That means you stand in front of the server because the beeper of the RAID controller is screaming so loud you can't ignore it. Then you take a look at the hotplug enclosure and see the drive slot with the constantly glowing red LED. You grumble a bit, rip out the drive and search for a compatible hdd. Plug in the new drive and watch the activity LEDs when the rebuild starts. Our systems usually have both the fault and activity LED of the replacement slot blinking as long as the rebuild runs. All the while you get mails with the alerts and the status of the RAID system.

Since all of this doesn't require stopping or logging in to the system, even a NOC technician is able to do this. Even your boss will (probably) be able to switch the broken drive.

--
Sandy
On Thu, Apr 3, 2008 at 10:26 AM, Sandy Drobic wrote:
Normally, a server with a hardware RAID controller and hotplug enclosure is SAF-TE aware: each hdd slot has an activity and a failure LED, so the RAID controller that monitors the disks will activate the failure LED if the drive does not work correctly.
I'm not a big believer in raid controllers (for smaller-sized shops anyway); just having a hot spare in your array will indicate the problem. Most drives these days do have an onboard LED for drive activity. After a failure and an automatic rebuild onto the hot spare, the one not flickering with any system activity is the dead one (and also the cold one, if no lights are present). Any disk cabinet which would allow a hot drive change will also have repeater lights where you can see them. Even small shops with single servers running raid 1 can afford a third disk (one mdadm command makes it a hot spare; see below).

--
----------JSA---------
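The hot spare is one command once the third disk is partitioned (names are examples); the rebuild onto it is then automatic when a member dies:

  mdadm /dev/md0 --add /dev/sdc1   # an extra member beyond raid-devices becomes a spare
  cat /proc/mdstat                 # shows it flagged (S) until it is pressed into service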
On Thu, Apr 3, 2008 at 7:57 PM, John Andersen wrote:
On Thu, Apr 3, 2008 at 10:26 AM, Sandy Drobic
wrote: Normally, a server with a hardware RAID controller and hotplug enclosure is SAF-TE aware: each hdd slot has an activity and a failure LED, so the RAID controller that monitors the disks will activate the failure LED if the drive does not work correctly.
I'm not a big believer in raid controllers (for smaller-sized shops anyway); just having a hot spare in your array will indicate the problem.
Most drives these days do have an onboard LED for drive activity. After a failure and an automatic rebuild onto the hot spare, the one not flickering with any system activity is the dead one (and also the cold one, if no lights are present).
Any disk cabinet which would allow a hot drive change will also have repeater lights where you can see them.
Even small shops with single servers running raid 1 can afford a third disk.
-- ----------JSA---------
Question: I hear everyone speaking about the hot-pluggability of (good) hardware raid controllers, but is it possible to hot plug a new SATA drive into a soft raid system? Are there motherboards that do not support hot plugging of SATA drives? For IDE it is probably impossible (IDE was not designed for it, I believe), unless the hardware RAID controller has special support for it. Thanks for taking the time to think about it.

Neil

--
There are two kinds of people: 1. People who start their arrays with 1. 1. People who start their arrays with 0.
On Fri, 2008-04-04 at 15:41 +0200, Neil wrote:
Question: I hear everyone speaking about the hot-pluggability of (good) hardware raid controllers, but is it possible to hot plug a new SATA drive into a soft raid system? Are there motherboards that do not support hot plugging of SATA drives? For IDE it is probably impossible (IDE was not designed for it, I believe), unless the hardware RAID controller has special support for it. Thanks for taking the time to think about it.
Many SuperMicro systems support this. We do this with their systems that have 4 SATA drive bays that we swap. Of course, we still need to do a umount/mount. And this can, if needed, be coordinated with udev, so you can specify extra stuff to happen when a disk is inserted (a rough sketch follows below).

--
Roger Oberholtzer OPQ Systems / Ramböll RST Ramböll Sverige AB Kapellgränd 7 P.O. Box 4205 SE-102 65 Stockholm, Sweden Office: Int +46 8-615 60 20 Mobile: Int +46 70-815 1696
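A rough sketch of that udev coordination; the rule file and helper script names are made up:

  # /etc/udev/rules.d/65-hotswap.rules
  ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/usr/local/sbin/disk-added.sh %k"

%k expands to the kernel name of the new device (sdc, say), so the script can mount it, kick off a raid rebuild, or just mail the admin.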
On Fri, Apr 4, 2008 at 10:41 AM, Roger Oberholtzer wrote:
On Fri, 2008-04-04 at 15:41 +0200, Neil wrote:
Question: I hear everyone speaking about the hot-pluggability of (good) hardware raid controllers, but is it possible to hot plug a new SATA drive into a soft raid system? Are there motherboards that do not support hot plugging of SATA drives? For IDE it is probably impossible (IDE was not designed for it, I believe), unless the hardware RAID controller has special support for it. Thanks for taking the time to think about it.
Many SuperMicro systems support this. We do this with their systems that have 4 SATA drive bays that we swap. Of course, we still need to do a umount/mount. And this can, if needed, be coordinated with udev, so you can specify extra stuff to happen when a disk is inserted.
Hot swap requires support in both the hardware and the driver. More and more hotswap functionality appears in Linux with every kernel release. (I have yet to get SATA hot swap to work in Windows, which really surprises me. I've randomly tried 2K/XP/2003 with our various machines.)

Assuming you have hotswap support: I can't say about now, but 6 months ago if you had a SATA drive fail and attempted a hotswap, the new drive would be recognized and assigned a new unique /dev/sdX name. Then you had to reconfigure your raid setup to use the new disk. It was definitely not as clean as a hardware raid controller.

I hope the kernel team has been working on that. Who knows, maybe 11.0 will have a more automated raid rebuild capability.

Greg
Greg Freemyer wrote:
On Fri, Apr 4, 2008 at 10:41 AM, Roger Oberholtzer
wrote: On Fri, 2008-04-04 at 15:41 +0200, Neil wrote:
Question: I hear everyone speaking about the hot-pluggability of (good) hardware raid controllers, but is it possible to hot plug a new SATA drive into a soft raid system? Are there motherboards that do not support hot plugging of SATA drives? For IDE it is probably impossible (IDE was not designed for it, I believe), unless the hardware RAID controller has special support for it. Thanks for taking the time to think about it.
Many SuperMicro systems support this. We do this with their systems that have 4 SATA drive bays that we swap. Of course, we still need to do a umount/mount. And this can, if needed, be coordinated with udev, so you can specify extra stuff to happen when a disk is inserted.
Hot swap requires support in both the hardware and the driver. More and more hotswap functionality appears in Linux with every kernel release. (I have yet to get SATA hot swap to work in Windows, which really surprises me. I've randomly tried 2K/XP/2003 with our various machines.)
Assuming you have hotswap support:
I can't say about now, but 6 months ago if you had a SATA drive fail and attempted a hotswap, the new drive would be recognized and assigned a new unique /dev/sdX name. Then you had to reconfigure your raid setup to use the new disk. It was definitely not as clean as a hardware raid controller.
One more reason to use volume labels instead of /dev/xdX names
I hope the kernel team has been working on that. Who knows maybe 11.0 will have a more automated raid rebuild capability.
I can't say about now, but 6 months ago if you had a SATA drive fail and attempted a hotswap, the new drive would be recognized and assigned a new unique /dev/sdX name. Then you had to reconfigure your raid setup to use the new disk. It was definitely not as clean as a hardware raid controller.
One more reason to use volume labels instead of /dev/xdX names
Tell me, how do you define a volume label for a disk that has not yet been formatted or partitioned? And do so in a manner that is recognised by the raid setup before it does its thing.

--
Cheers, Carlos E. R.
On Fri, Apr 4, 2008 at 9:36 AM, Greg Freemyer wrote:
I can't say about now, but 6 months ago if you had a SATA drive fail and attempted a hotswap, the new drive would be recognized and assigned a new unique /dev/sdX name. Then you had to reconfigure your raid setup to use the new disk.
Correct me if I'm wrong, but it seems to me that udev provides persistent naming of block devices and thus a method to fix this, and that the need to adjust one's raid setup came from unfamiliarity with this (new) feature. An example follows below.

--
----------JSA---------
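For example, udev's persistent names under /dev/disk/by-id/ survive a hotswap reshuffle, so they can be used instead of /dev/sdX when assembling an array (the serial-based names below are made up):

  ls -l /dev/disk/by-id/    # stable names built from model and serial number
  mdadm --assemble /dev/md0 /dev/disk/by-id/ata-ST3500320AS_9QM0XXXX-part1 \
                            /dev/disk/by-id/ata-ST3500320AS_9QM0YYYY-part1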
On Sat, Apr 5, 2008 at 2:36 AM, John Andersen wrote:
On Fri, Apr 4, 2008 at 9:36 AM, Greg Freemyer
wrote: I can't say about now, but 6 months ago if you had a SATA drive fail and attempted a hotswap, the new drive would be recognized and assigned a new unique /dev/sdX name. Then you had to reconfigure your raid setup to use the new disk.
Correct me if I'm wrong, but it seems to me that udev provides persistent naming of block devices and thus a method to fix this, and that the need to adjust one's raid setup came from unfamiliarity with this (new) feature.
I don't know any details. I can just say it was being discussed on LKML, and I don't recall the answer being "use udev". If someone knows whether it has been addressed, I would be interested in a URL to read about it.

Greg
Sandy Drobic wrote:
On modern systems the cpu load is no longer an argument, even if you were to use software RAID5. The real argument is ease of use: software raid cannot include /boot, so you have to provide redundancy for /boot in some other way.
Hmmm. Sandy,

I don't know if that is true anymore. On the last 3 software raid systems I set up using YaST, everything except swap was included in the raid array. That included /, /boot and /home. I agree that you are only using one disk to "boot" from with grub using:

title openSUSE 10.3 - 2.6.23.17-ccj64
    root (hd0,4)

However, hd0 is mapped to the raid array where:

[22:40 nirvana/boot] # cat grub/device.map
(hd0)   /dev/mapper/nvidia_hacfgfda

So I am booting from the 5th partition (the "pc slice number" in BSD terminology; partitions are counted from "0", so "4" is the 5th). hd0 is the software raid array built from /dev/sda and /dev/sdb in raid1:

22:28 nirvana~> sudo dmraid -r
/dev/sda: nvidia, "nvidia_hacfgfda", mirror, ok, 976773166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_hacfgfda", mirror, ok, 976773166 sectors, data@ 0

with partitions:

22:18 nirvana~> cat /etc/fstab
/dev/mapper/nvidia_hacfgfda_part7  /      ext3  acl,user_xattr  1 1
/dev/mapper/nvidia_hacfgfda_part5  /boot  ext3  acl,user_xattr  1 2
/dev/mapper/nvidia_hacfgfda_part8  /home  ext3  acl,user_xattr  1 2
/dev/mapper/nvidia_hacfgfda_part6  swap   swap  defaults        0 0

So in my case, the system is booting off from (hd0,4), the 5th partition on the raid array, which is /boot, and which is mirrored between /dev/sda and /dev/sdb.

I don't know whether grub has internal logic to select an individual disk to boot from, but I believe that the physical disk boot selection is governed by the BIOS selection of hard disk boot priority.

So it appears that /boot is mirrored and also used to boot as well.

--
David C. Rankin
David C. Rankin wrote:
Sandy Drobic wrote:
On modern systems the cpu load is no longer an argument, even if you were to use software RAID5. The real argument is ease of use: software raid cannot include /boot, so you have to provide redundancy for /boot in some other way.
Hmmm. Sandy,
I don't know if that is true anymore. On the last 3 software raid systems I set up using YaST, everything except swap was included in the raid array. That included /, /boot and /home. I agree that you are only using one disk to "boot" from with grub using:
title openSUSE 10.3 - 2.6.23.17-ccj64
    root (hd0,4)
However, hd0 is mapped to the raid array where:
[22:40 nirvana/boot] # cat grub/device.map
(hd0)   /dev/mapper/nvidia_hacfgfda
So I am booting from the 5th partition (the "pc slice number" in BSD terminology; partitions are counted from "0", so "4" is the 5th). hd0 is the software raid array built from /dev/sda and /dev/sdb in raid1:
22:28 nirvana~> sudo dmraid -r
/dev/sda: nvidia, "nvidia_hacfgfda", mirror, ok, 976773166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_hacfgfda", mirror, ok, 976773166 sectors, data@ 0
with partitions:
22:18 nirvana~> cat /etc/fstab
/dev/mapper/nvidia_hacfgfda_part7  /      ext3  acl,user_xattr  1 1
/dev/mapper/nvidia_hacfgfda_part5  /boot  ext3  acl,user_xattr  1 2
/dev/mapper/nvidia_hacfgfda_part8  /home  ext3  acl,user_xattr  1 2
/dev/mapper/nvidia_hacfgfda_part6  swap   swap  defaults        0 0
So in my case, the system is booting off from (hd0,4), the 5th partition on the raid array, which is /boot, and which is mirrored between /dev/sda and /dev/sdb.
I don't know whether grub has internal logic to select an individual disk to boot from, but I believe that the physical disk boot selection is governed by the BIOS selection of hard disk boot priority.
So it appears that /boot is mirrored and also used to boot as well.
David, I think you are right. I have a RAID-only system under 10.3. I think the only constraint is that the partition that contains /boot needs to be mirrored as a RAID 1. In my case, / and /home and all the rest except for /boot are in a RAID 5 1.5TB array. I even have swap in a RAID 0 array (that was just to see if it would work, for 'purity' sake). I know some say swap should not be in a RAID n because it can do it itself, but for the sake of both testing and disk management of the 4 disks I use as arrays, it was expedient to do, and it works flawlessly. I DO NOT have ANY separate partition used for the purpose of booting.

The only concession I had to make was to ensure that the MBR was written to BOTH sda and sdb, or it was a crap-shoot as to which one the BIOS would pick for initial boot before the OS got involved. Usually it would work, but sometimes I had to reset several times before I tried the workaround. Now it is rock solid. Don't ask me why, as the theory as I understand it says the BIOS shouldn't know or care about SuSE's software raid schemes.

In addition, I have a HARDWARE RAID 5 controller for an additional 4 drives with another 2 TB of space, which becomes available as soon as the OS loads the driver module at boot time, but I can't boot from those drives because the BIOS can't see them.

I think the support for mirrored /boot started with 10.2, but I didn't start playing with it until the beta of 10.3. Support for 'fake-raid' on the motherboard is 'iffy' in SuSE. I've seen reports where it works, but on my ASUS, as of 10.3, it doesn't work correctly. Maybe 11.0 will work. I bought another set of drives to test that, and I have a 2nd MB with a 'fake-raid' controller that I will use to test it. I'll probably switch to software-only raid regardless, so I don't lose everything if the MB goes south and the replacement uses a different support scheme, but it would be nice to know if it works :)
John Bown wrote:
Hello everyone. I have an old server with an IDE RAID card in it on which I'd like to install openSUSE 10.3. The problem is, support for said RAID card (a Dell CERC ATA/100) has been discontinued for some time now. With that I ask, how feasible and/or advisable would it be to attempt the following?
1) Install openSUSE and configure it to use software RAID (two mirrored IDE drives (master/slave) for system, two striped IDE drives (master/slave) for data) 2) Enable Encrypted File System (EFS)
Basically, I'm worried that an IDE based machine will be painfully slow due to the high disk activity. Ideally I would use the machine's existing SCSI U320 interface, but the required hard drives are just too expensive.
Get some modern SATA disks. They're doing I/O at 300 Mbyte/s now. While they don't have out-of-order queueing like SCSI and SAS, they do offer high burst speeds for I/O. (SAS -- Serial Attached SCSI -- uses the same cables; just use SAS cards, which conveniently can ALSO control SATA disks, too. Each port individually determines if that cable is attached to a SATA or an SAS disk.)
Since the machine has two 2.4GHz Xeon processors in it, couldn't I designate one to do nothing but RAID and encryption,
That would be the master/slave CPU model of running a *nix kernel. While it was cutting edge in 1982, it was obsolete by 1985.
thereby leaving the other processor free to do everything else, such as running virtual machines? If so, can anyone point me in the direction of a good online how-to?
Considering that your 2.4 GHz processor runs several orders of magnitude faster than your disk drives can send or receive data, AND that most controllers have bus-master capability, there's really no point in this. Disk I/O has very little impact on modern CPUs, unless you're running Serial Attached SCSI and, say, doing several hundred thousand database transactions/second on a database which is spread across a few thousand disk drives. And even then, the overwhelming majority of the disk I/O load will be on the bus-master controller cards, not the CPU cores.
Thank you for your collective time.
We're here to help.
On Wed, Apr 2, 2008 at 5:26 PM, Sam Clemens wrote:
John Bown wrote:
Hello everyone. I have an old server with an IDE RAID card in it on which I'd like to install openSUSE 10.3. The problem is, support for said RAID card (a Dell CERC ATA/100) has been discontinued for some time now. With that I ask, how feasible and/or advisable would it be to attempt the following?
1) Install openSUSE and configure it to use software RAID (two mirrored IDE drives (master/slave) for system, two striped IDE drives (master/slave) for data) 2) Enable Encrypted File System (EFS)
Basically, I'm worried that an IDE based machine will be painfully slow due to the high disk activity. Ideally I would use the machine's existing SCSI U320 interface, but the required hard drives are just too expensive.
Get some modern SATA disks. They're doing I/O at 300 Mbyte/s now. While they don't have out-of-order queueing like SCSI and SAS, they do offer high burst speeds for I/O. (SAS -- Serial Attached SCSI -- uses the same cables; just use SAS cards, which conveniently can ALSO control SATA disks, too. Each port individually determines if that cable is attached to a SATA or an SAS disk.)
A few misconceptions there:

1) At least for now, a single drive does not have the ability to saturate a 150 MB/sec connection. You only need 300 MB/sec if you're using PMP to multiplex multiple drives on one cable. And then you need to be using PCI Express, because PCI is also too slow to effectively use 300 MB/sec.

2) SATA-2 drives do now support out-of-order queueing (NCQ or TCQ, I've forgotten). But the benchmarks are showing that the Linux kernel elevators work so well that little (or nothing) is gained from letting the drive empty its cache out of order. Apparently there are some specific workloads where NCQ is a win.

3) Strangely, most drives do tie 300 MB/sec to the SATA-2 function set, so if you use the 150 MB/sec throttle jumper to slow down the drive, you lose the NCQ function.

4) Ignoring SAS for a second: I assume the old MB under discussion does not have any PCIe slots, so I don't think you will find a SATA-2 controller for PCI, so I think you are stuck with SATA-1 functionality and 150 MB/sec speed. I don't know about SAS controllers. I have not researched them.

Greg
Greg Freemyer wrote:
On Wed, Apr 2, 2008 at 5:26 PM, Sam Clemens
wrote: John Bown wrote:
Hello everyone. I have an old server with an IDE RAID card in it on which I'd like to install openSUSE 10.3. The problem is, support for said RAID card (a Dell CERC ATA/100) has been discontinued for some time now. With that I ask, how feasible and/or advisable would it be to attempt the following?
1) Install openSUSE and configure it to use software RAID (two mirrored IDE drives (master/slave) for system, two striped IDE drives (master/slave) for data) 2) Enable Encrypted File System (EFS)
Basically, I'm worried that an IDE based machine will be painfully slow due to the high disk activity. Ideally I would use the machine's existing SCSI U320 interface, but the required hard drives are just too expensive.
Get some modern SATA disks. They're doing I/O at 300 Mbyte/s now. While they don't have out-of-order queueing like SCSI and SAS, they do offer high burst speeds for I/O. (SAS -- Serial Attached SCSI -- uses the same cables; just use SAS cards, which conveniently can ALSO control SATA disks, too. Each port individually determines if that cable is attached to a SATA or an SAS disk.)
A few misconceptions there:
1) At least for now, a single drive does not have the ability to saturate a 150 MB/sec connection. You only need 300 MB/sec if you're using PMP to multiplex multiple drives on one cable. And then you need to be using PCI Express, because PCI is also too slow to effectively use 300 MB/sec.
I'm not saying that he needs it...only that it's available.
2) SATA-2 drives do now support out-of-order queueing (NCQ or TCQ, I've forgotten). But the benchmarks are showing that the Linux kernel elevators work so well that little (or nothing) is gained from letting the drive empty its cache out of order. Apparently there are some specific workloads where NCQ is a win.
3) Strangely, most drives do tie 300 MB/sec to the SATA-2 function set, so if you use the 150 MB/sec throttle jumper to slow down the drive, you lose the NCQ function.
Why deliberately throttle down a drive? They're slow enough already. And the SAS/SATA negotiation standard specifies that the controller and the disk are supposed to negotiate their speed.
4) Ignoring SAS for a second: I assume the old MB under discussion does not have any PCIe slots, so I don't think you will find a SATA-2 controller for PCI, so I think you are stuck with SATA-1 functionality and 150 MB/sec speed. I don't know about SAS controllers. I have not researched them.
Go to Adaptec's site. There are a lot of whitepapers on the subject... and here's a cool presentation by HP on the PHY and signal layer: www.scsita.org/aboutscsi/sas/tutorials/SAS_Phy_layer.pdf
On Wed, Apr 2, 2008 at 6:24 PM, Sam Clemens wrote:
Greg Freemyer wrote:
<snip>
3) Strangely, most drives do tie 300 MB/sec to the SATA-2 function set, so if you use the 150 MB/sec throttle jumper to slow down the drive, you lose the NCQ function.
Why deliberately throttle down a drive? They're slow enough already. And the SAS/SATA negotiation standard specifies that the controller and the disk are supposed to negotiate their speed.
One of the standard troubleshooting techniques on lkml-ide is to throttle down the drive. A surprising number of times it fixes the problem. And the 300 MB/sec Seagates I've been buying all come with the jumper installed (i.e. they are throttled as shipped; a quick way to check the negotiated speed is shown below).

FYI: We buy about 20 to 50 drives a month for our lab, depending on what is going on. The first thing we do is wipe them (dd if=/dev/zero of=/dev/sdb). We occasionally try wiping with and without the jumper installed. So far no difference, but we tend to use PCI-based controllers, so the PCI bus could be our real throttle.

Greg
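To see what a drive and its jumper are actually doing, hdparm can report the signaling speeds a SATA drive advertises (output varies by drive and hdparm version):

  hdparm -I /dev/sdb | grep -i speed   # e.g. "Gen1 signaling speed (1.5Gb/s)"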
participants (13)
- Carlos E. R.
- David C. Rankin
- Greg Freemyer
- Jason Bailey, Sun Advocate Webmaster
- Joachim Schrod
- John Andersen
- John Bown
- Ken Schneider
- Neil
- Richard Creighton
- Roger Oberholtzer
- Sam Clemens
- Sandy Drobic