[opensuse] 10.2 no RAID to 11.0 RAID 1
Hi all, I have recently had a failure of a HDD, after a backup thankfully (if it's any good). I have decided to implement software RAID 1 (mirroring) onto a second drive of the same size, as opposed to going the costlier route of RAID 5 or a DROBO. I see 11.0 has an option under the partitioner in YaST for RAID, and I am wondering if the 11.0 install will allow me to create the RAID array and set it up if it detects two drives of the same size and with identical partitions? Wondering Hylton
Hylton Conacher (ZR1HPC) wrote:
I see 11.0 has an option under the partitioner in YaST for RAID, and I am wondering if the 11.0 install will allow me to create the RAID array and set it up if it detects two drives of the same size and with identical partitions?
Yes it will. /Per Jessen, Zürich
On 2008/09/20 15:37 (GMT-0400) Per Jessen composed:
Hylton Conacher (ZR1HPC) wrote:
I see 11.0 has an option under the partitioner in YaST for RAID, and I am wondering if the 11.0 install will allow me to create the RAID array and set it up if it detects two drives of the same size and with identical partitions?
Yes it will.
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1? -- Felix Miata
The Saturday 2008-09-20 at 19:51 -0400, Felix Miata wrote:
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
I could name a lot of reasons :-p Like bios raid not being real hardware raid, not portable, less flexible. -- Cheers, Carlos E. R.
On 2008/09/21 02:00 (GMT+0200) Carlos E. R. composed:
The Saturday 2008-09-20 at 19:51 -0400, Felix Miata wrote:
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
I could name a lot of reasons :-p
Like bios raid not being real hardware raid,
This matters why?
not portable,
Portable to what? Why is not portable a problem?
less flexible.
How? -- Felix Miata
Felix Miata wrote:
On 2008/09/21 02:00 (GMT+0200) Carlos E. R. composed:
The Saturday 2008-09-20 at 19:51 -0400, Felix Miata wrote:
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
I could name a lot of reasons :-p
Like bios raid not being real hardware raid,
This matters why?
Dunno, I have 3 bios fake raid0(s) and 2 software raid0 systems. Can't tell any difference, except I have to type mdxxx for some stuff on software raid and dmxxx on the bios raid setups. The bios raid just lets me rebuild an array before the OS boots; other than that, I understood that bios raid was simply software raid anyway.
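For reference, a quick way to see that naming difference from a shell (a sketch; the md array name and the dmraid set names are illustrative):

  cat /proc/mdstat             # kernel software-raid arrays, e.g. md0
  mdadm --detail /dev/md0      # member disks and sync state of an md array
  dmraid -s                    # BIOS/fake-raid sets discovered by dmraid
  ls /dev/mapper/              # their device-mapper names, e.g. isw_..._Volume0 for Intel sets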
not portable,
Portable to what? Why is not portable a problem?
I haven't figured this one out either. I can pull a bios raid drive and put it in another machine and read it just fine... I've never tried to move a whole array from one box to another though. Maybe this is where the portability issue creeps in. If so, I'll never notice it.
less flexible.
How?
Waiting on answer... -- David C. Rankin
The Saturday 2008-09-20 at 22:36 -0500, David C. Rankin wrote:
The Saturday 2008-09-20 at 19:51 -0400, Felix Miata wrote:
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
I could name a lot of reasons :-p
Like bios raid not being real hardware raid,
This matters why?
Dunno, I have 3 bios fake raid0(s) and 2 software raid0 systems. Can't tell any difference, except I have to type mdxxx for some stuff on software raid and dmxxx on the bios raid setups. The bios raid just lets me rebuild an array before the OS boots; other than that, I understood that bios raid was simply software raid anyway.
Exactly, this is why. Real hardware raid is faster, there is no intervention from the cpu. All is done by the card hardware. Therefore, if all I'm going to get is a fake raid, I prefer real software that I have control of.
not portable,
Portable to what? Why is not portable a problem?
I haven't figured this one out either. I can pull a bios raid drive and put it in another machine and read it just fine... I've never tried to move a whole array from one box to another though. Maybe this is where the portability issue creeps in. If so, I'll never notice it.
Non-portable means that if your mobo dies, you cannot put your raid into a mobo of a different make. You need one with the same type of bios raid.
less flexible.
How?
Waiting on answer...
Fewer options. Fewer repair tools. Fewer choices of how to set up the array. For instance, with software raid you can have different disks (sizes, makes, speeds, partitions...) -- Cheers, Carlos E. R.
On 2008/09/21 11:34 (GMT+0200) Carlos E. R. composed:
The Saturday 2008-09-20 at 22:36 -0500, David C. Rankin wrote:
The Saturday 2008-09-20 at 19:51 -0400, Felix Miata wrote:
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
I could name a lot of reasons :-p
Like bios raid not being real hardware raid,
This matters why?
Dunno, I have 3 bios fake raid0(s) and 2 software raid0 systems. Can't tell any difference, except I have to type mdxxx for some stuff on software raid and dmxxx on the bios raid setups. The bios raid just lets me rebuild an array before the OS boots; other than that, I understood that bios raid was simply software raid anyway.
Exactly, this is why. Real hardware raid is faster, there is no intervention from the cpu. All is done by the card hardware.
No one was disputing the superiority of hardware RAID over software RAID, but hardware RAID was not among the options being compared.
Therefore
HWR not relevant, so nothing can flow from the point.
if all I'm going to get is a fake raid, I prefer real software that I have control of.
And BIOS RAID1 provides no control? Every RAID BIOS setup utility I've opened seems to provide quite a bit of control, and is quick and easy to get into, unlike waiting more than a minute to get an OS booted that expects the HDs to already be configured.
not portable,
Portable to what? Why is not portable a problem?
I haven't figured this one out either. I can pull a bios raid drive and put it in another machine and read it just fine... I've never tried to move a whole array from one box to another though. Maybe this is where the portability issue creeps in. If so, I'll never notice it.
Non portable means that if your mobo dies, you can not put your raid into a mobo of a different make. You need one with the same type of bios raid.
By definition, we are only talking simple RAID mirroring here. I don't see how anyone wouldn't be able to put a pair of disks in another machine and use the built-in setup program to re-make the mirror set from the disks provided.
less flexible.
How?
Waiting on answer...
Less options. Less repair tools. Less choices of how to setup the array.
For instance, with software raid you can have different disks (sizes, makes, speeds, partitions...)
Maybe since you're so strong on software RAID you don't even know whether these things exist in BIOS RAID1? I don't see how a different speed or brand could matter, and I know that normally having disks of different sizes means the size of the RAID1 will be limited to the size of the smaller disk. As to any option to have different partitioning, I don't see how that could or should be reason enough to prefer pure software to the simplicity of BIOS management and failed-device replacement. RAID1 really doesn't seem to me to require any setup complexity at all. -- Felix Miata
The Sunday 2008-09-21 at 07:38 -0400, Felix Miata wrote:
Exactly, this is why. Real hardware raid is faster, there is no intervention from the cpu. All is done by the card hardware.
No one was disputing the superiority of hardware RAID over software RAID, but hardware RAID was not among the options being compared.
Then let me put it in different words: there being no speed advantage of fake raid over software raid, the choice is obvious: either full hardware, or full software. I was not proposing to use full hardware.
Therefore
HWR not relevant, so nothing can flow from the point.
PFFFFF! Come on, get real. If you take things like that, aggressively, then I'll put my opinion bluntly: it is relevant. I don't consider fake raid an option. I want either real hardware raid, or full software raid, and open source at that. I want no fakes. I hate fake modems, fake wifi cards, fake tv cards, fake printers, fake etcetera. I want real things.
Non portable means that if your mobo dies, you can not put your raid into a mobo of a different make. You need one with the same type of bios raid.
By definition, we are only talking simple RAID mirroring here. I don't see how anyone wouldn't be able to put a pair of disks in another machine, and use the built in setup program to choose re-make mirror set from the disks provided.
Oh, yes, because I have seen fake raid disks, in perfectly good condition, fail because the mobo failed, and the data be irretrievable because no other mobo was able to use them. They had to be reformatted in full, all data lost. I have been bitten. I do not want any fake raid near me or anybody I might support. There are no advantages in fake raid that I can see. They are good for windows, I suppose. -- Cheers, Carlos E. R.
Felix Miata wrote:
On 2008/09/20 15:37 (GMT-0400) Per Jessen composed:
Hylton Conacher (ZR1HPC) wrote:
I see 11.0 has an option under the partitioner in YaST for RAID, and I am wondering if the 11.0 install will allow me to create the RAID array and set it up if it detects two drives of the same size and with identical partitions?
Yes it will.
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
With software RAID, you get to use mdadmd to monitor your array. /Per Jessen, Zürich
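For reference, mdadmd is essentially a wrapper around mdadm's monitor mode. A minimal sketch of doing the same by hand (assuming one array at /dev/md0 and working local mail delivery):

  cat /proc/mdstat                                # quick health check of all arrays
  mdadm --detail /dev/md0                         # per-array state and member disks
  mdadm --monitor --scan --mail=root --delay=300  # daemon mode: mail root on failure events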
On 2008/09/21 17:34 (GMT+0200) Per Jessen composed:
Felix Miata wrote:
On 2008/09/20 15:37 (GMT-0400) Per Jessen composed:
Hylton Conacher (ZR1HPC) wrote:
I see 11.0 has an option under the partitioner in YaST for RAID, and I am wondering if the 11.0 install will allow me to create the RAID array and set it up if it detects two drives of the same size and with identical partitions?
Yes it will.
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
With software RAID, you get to use mdadmd to monitor your array.
I don't know whether "get to" is a good thing or a bad thing. Right now I don't use any RAID, and don't do any "monitoring" of ordinary HDs. What's to monitor? Can't I perform a similar function with the RAID BIOS utility? Surely there must be some advantage of the ICH8R chip over the ICH8 chip. BIOS RAID is certainly not 100% in the motherboard BIOS if it is accompanied by extra function incorporated into the I/O chip (ICH9R) or by an additional I/O chip (IT821x), and such boards cost more than motherboards that have neither. -- Felix Miata
Felix Miata wrote:
On 2008/09/20 15:37 (GMT-0400) Per Jessen composed:
Hylton Conacher (ZR1HPC) wrote:
I see 11.0 has an option under the partitioner in YaST for RAID, and I am wondering if the 11.0 install will allow me to create the RAID array and set it up if it detects two drives of the same size and with identical partitions?
Yes it will.
If you have a RAID1 BIOS built into the motherboard, are there any reasons not to use it, as opposed to just using pure software RAID1?
Well, there are a few reasons, IMO, to prefer Linux software RAID over pseudo-hardware RAID:

1) There is the portability issue. I work with a few real-hardware RAID controllers and I can tell from experience: having to rely on the hardware manufacturers can cost you money. Just an example: if you have, say, an HP RAID card, and your RAID card "dies", you sometimes have to wait a long time to get a compatible replacement. Of course this is not always the case, but it happens, especially on old hardware. So, as far as portability is concerned, I would prefer the Linux RAID option.

2) Speed. Your pseudo-raid mobo card does not do any math for you (where required), but will require the processor to do it all. This only applies to RAID 4/5/6 setups. If you use RAID 0, all information is striped onto your HDs; I don't know if there is any performance improvement with either solution, but probably there isn't. With RAID 1, all information is mirrored to another HD. For this option I would prefer to use Linux RAID and rely on Linux to take care of all the asynchronous writes. As far as reading is concerned, software RAID has proven effective, as it can read from both HDs with great overall results.

3) Interoperability. With most Intel chipsets you can hotplug your HDs. Just use the mdadm tool to remove the HD from the array, remove the HD, add a new one and rebuild your array (see the sketch below). You can do this while your machine is in production mode. I believe you are unable to do that with a mobo pseudo-raid?

4) Routine. If you have a lot of machines to set up, you can have a monitoring system which is common to all the machines. You can save time and money by having just one monitoring system.

Well, here is my 1 cent... -- Rui Santos http://www.ruisantos.com/ Veni, vidi, Linux!
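For reference, the hot-swap sequence in point 3 boils down to a few commands (a sketch, assuming a RAID1 at /dev/md0 whose failing member is /dev/sdb1 and whose replacement shows up under the same device name):

  mdadm /dev/md0 --fail /dev/sdb1      # mark the member faulty
  mdadm /dev/md0 --remove /dev/sdb1    # detach it from the array
  # physically swap the disk, partition it like the old one, then:
  mdadm /dev/md0 --add /dev/sdb1      # the resync starts automatically
  cat /proc/mdstat                    # watch the rebuild progress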
On Mon, Sep 22, 2008 at 5:52 AM, Rui Santos
3) Interoperability. With most Intel chipsets you can hotplug your HDs. Just use the mdadm tool to remove the HD from the array, remove the HD, add a new one and rebuild your array. You can do this while your machine is in production mode. I believe you are unable to do that with a mobo pseudo-raid?
You forgot to mention that mdadm is CRAP. It will de-sync the array for no reason and there is no way to force a rebuild. You will end up having to rebuild the array from scratch. Raid 0 & 5 will suffer full data loss. Also, the md RAID webpage *REALLY* instills confidence: http://cgi.cse.unsw.edu.au/~neilb/SoftRaid
On 9/22/08, Andrew Joakimsen
On Mon, Sep 22, 2008 at 5:52 AM, Rui Santos
wrote: 3) Interoperability. With most Intel chipsets you can hotplug your HDs. Just use the mdadm tool to remove the HD from the array, remove the HD, add a new one and rebuild your array. You can do this while your machine is in production mode. I believe you are unable to do that with a mobo pseudo-raid?
You forgot to mention that mdadm is CRAP. It will de-sync the array for no reason and there is no way to force a rebuild. You will end up having to rebuild the array from scratch. Raid 0 & 5 will suffer full data loss.
Also, the md RAID webpage *REALLY* instills confidence: http://cgi.cse.unsw.edu.au/~neilb/SoftRaid
That page is 4 years out of date, so maybe a current page will give you a little more confidence? http://linux-raid.osdl.org/index.php/Main_Page Note that it is hosted by OSDL, which gives me some confidence in and of itself, but I admit to using 3ware cards for my raid needs. FYI: Neil Brown (the md maintainer) is a well-known kernel developer. Greg -- Greg Freemyer
On Mon, Sep 22, 2008 at 12:38 PM, Greg Freemyer
That page is 4 years out of date, so maybe a current page will give you a little more confidence?
No, because I've already "been there" and "done that" -- it is IMPOSSIBLE to recover a RAID-1 array that just one day (after properly shutting down the system and not having hardware failures) decided not to boot. The funny thing is there was a system with similar software (openSUSE 11.0 w/ no GUI) and hardware (same mainboard, ASUS P4P) that a week later had the same exact failure; again the hard disks tested fine, and the data on the individual partitions could be read, it's just that mdadm did not want to work. Maybe the issue is not mdadm but instead openSUSE 11.0? Either way, I'd advise staying away from md RAID and its horrible mdadm tool. For me, I prefer to switch to hardware RAID cards and keep on using openSUSE.
2008/9/22 Andrew Joakimsen
On Mon, Sep 22, 2008 at 12:38 PM, Greg Freemyer
wrote:
<snip>
No because I've already "been there" and "done that" -- It is IMPOSSIBLE to recover a RAID-1 array that just one day (after properly shutting down the system and not having hardware failures) decided not to boot.
Ahhh, now this is interesting :) The main reason I was creating the RAID1 array was so that if one drive failed, I could receive a message from mdadm, swap out the bad drive and continue working, without losing data. In view of the above comment, I guess it is time I ordered the 3rd HDD and rather set up RAID5. <snip>
Maybe the issue is not mdadm but instead openSUSE 11.0? Either way I'd advise to stay away from md RAID and its horrible mdadm tool. For me I prefer to switch to hardware RAID cards and keep on using openSUSE.
HW RAID cards are a MAJOR expense for a home hobbyist who just likes to make sure their data is retrievable in the event of a HDD crash. Regards Hylton
Hylton Conacher (ZR1HPC) wrote:
2008/9/22 Andrew Joakimsen
: On Mon, Sep 22, 2008 at 12:38 PM, Greg Freemyer
wrote: <snip>
No because I've already "been there" and "done that" -- It is IMPOSSIBLE to recover a RAID-1 array that just one day (after properly shutting down the system and not having hardware failures) decided not to boot.
Ahhh, now this is interesting :) The main reason I was creating the RAID1 array was so that if one drive failed, I could receive a message from mdadm, swap out the bad drive and continue working, without losing data.
In view of the above comment, I guess it is time I ordered the 3rd HDD and rather setup RAID5.
No Hylton. Andrew just had one bad experience. You can use soft RAID1 with no problem. I use both RAID1 and RAID5 on several machines and, if a drive "dies", I get a message from mdadm stating it. I can also change the faulty HD with no downtime at all (depending on your hardware). IMO, you should take Andrew's experience as a reminder that Linux software RAID is not perfect. Well... nothing is... but IMO, it is very close to it :)
<snip>
Maybe the issue is not mdadm but instead openSUSE 11.0? Either way I'd advise to stay away from md RAID and its horrible mdadm tool. For me I prefer to switch to hardware RAID cards and keep on using openSUSE.
HW RAID cards are a MAJOR expense for a home hobbyist who just likes to make sure their data is retrievable in the event of a HDD crash.
Regards Hylton
-- Rui Santos
On Tue, Sep 23, 2008 at 12:49 PM, Rui Santos
Hylton Conacher (ZR1HPC) wrote:
2008/9/22 Andrew Joakimsen
: On Mon, Sep 22, 2008 at 12:38 PM, Greg Freemyer
wrote: <snip>
No because I've already "been there" and "done that" -- It is IMPOSSIBLE to recover a RAID-1 array that just one day (after properly shutting down the system and not having hardware failures) decided not to boot.
Ahhh, now this is interesting :) The main reason I was creating the RAID1 array was so that if one drive failed, I could receive a message from mdadm, swap out the bad drive and continue working, without losing data.
In view of the above comment, I guess it is time I ordered the 3rd HDD and rather setup RAID5.
No Hylton. Andrew just had one bad experience. You can use soft RAID1 with no problem. I use both RAID1 and RAID5 on several machines and, if a drive "dies", I get a message from mdadm stating it. I can also change the faulty HD with no downtime at all (depending on your hardware). IMO, you should take Andrew's experience as a reminder that Linux software RAID is not perfect. Well... nothing is... but IMO, it is very close to it :)
Hi, I decided to butt in at this point because one important side of the issue hasn't been looked at, and a question hasn't been asked of Andrew and Rui: what brand of hard disks did you use? I cannot imagine that the warnings of mdadm are NOT supplier-dependent. (Since I want to experiment a bit more with soft raid on my next system, this is very important to me.) Neil -- There are three kinds of people: those who can count, and those who cannot count
The Tuesday 2008-09-23 at 13:15 +0200, Neil wrote:
I decided to butt in at this point because one important side of the issue hasn't been looked at, and a question hasn't been asked of Andrew and Rui: what brand of hard disks did you use? I cannot imagine that the warnings of mdadm are NOT supplier-dependent. (Since I want to experiment a bit more with soft raid on my next system, this is very important to me.)
AFAIK, they aren't. -- Cheers, Carlos Robinson
Neil wrote:
No Hylton. Andrew just had one bad experience. You can use soft RAID1 with no problem. I use both RAID1 and RAID5 on several machines and, if a drive "dies", I get a message from mdadm stating it. I can also change the faulty HD with no downtime at all (depending on your hardware). IMO, you should take Andrew's experience as a reminder that Linux software RAID is not perfect. Well... nothing is... but IMO, it is very close to it :)
Hi
I decided to butt in at this point because one important side of the issue hasn't been looked at, and a question hasn't been asked of Andrew and Rui: what brand of hard disks did you use?
Mainly Seagate, but also Maxtor and WD. Mainly SATA, but also SAS and SCSI.
I cannot imagine that the warnings of mdadm are NOT supplier-dependent. (Since I want to experiment a bit more with soft raid on my next system, this is very important to me.)
As Carlos also stated, AFAIK, they are not. Also, with these newer SATA drives, the problems related to HDs are reduced compared to PATA drives. Many users, including myself, had problems with PATA. Having HDs on both the master and slave positions of one channel is not a good thing. After reading about it, I decided to buy an extra adapter and place a single PATA HD on each channel. After that, my problems went away, until this very day. I still have a machine running a RAID5 on a 24/7 basis with 4 PATA HDs. The only two times I got a warning from mdadm were when the drives actually died... As I stated, I have great confidence in Linux RAID.
Neil
Rui
-- Rui Santos
The Tuesday 2008-09-23 at 12:27 +0200, Hylton Conacher (ZR1HPC) wrote:
No because I've already "been there" and "done that" -- It is IMPOSSIBLE to recover a RAID-1 array that just one day (after properly shutting down the system and not having hardware failures) decided not to boot.
Ahhh, now this is interesting :) The main reason I was creating the RAID1 array was so that if one drive failed, I could receive a message from mdadm, swap out the bad drive and continue working, without losing data.
Of course you can continue. You can even have a third disk as an active spare, and the system will switch over to it without downtime. It was only Andrew who had some problem, and at the moment it is not fully clear what it was.
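For reference, the active-spare setup mentioned above is a single option at creation time (a sketch, with hypothetical device names):

  # two-disk mirror plus one hot spare that takes over automatically on failure
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1
  # on an existing, healthy mirror, simply adding a third partition makes it a spare:
  mdadm /dev/md0 --add /dev/sdc1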
In view of the above comment, I guess it is time I ordered the 3rd HDD and rather setup RAID5.
You could have the same type of problem. Both raid 1 and raid 5 can withstand the failure of a single disk without downtime, with different approaches. It will depend on the hardware of course, whether you can replace the disk "hot".
HW RAID cards are a MAJOR expense for a home hobbyist who just likes to make sure their data is retrievablein the event of a HDD crash.
In fact, I don't think you need raid at all. Raid is for minimizing downtime to zero, for systems that have to be accessible full time. That is not usually the case at home, unless you want to experiment. IMO it is safer to have that second disk as a full backup, updated at least daily via rsync. The backup has an advantage: you can recover from a software crash or finger error, because the other disk is not mounted, so nothing is written to it "yet". On a raid, both copies would have the same wrong data. -- Cheers, Carlos Robinson
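For reference, a minimal sketch of that daily backup (assuming the second disk is mounted at /backup; the paths and the cron schedule are illustrative):

  rsync -aH --delete --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* --exclude=/tmp/* --exclude=/backup/* / /backup/
  # run it from cron for the "at least daily" part, e.g. a root crontab line:
  # 30 2 * * * rsync -aH --delete --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* --exclude=/tmp/* --exclude=/backup/* / /backup/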
On Tue, Sep 23, 2008 at 6:27 AM, Hylton Conacher (ZR1HPC)
In view of the above comment, I guess it is time I ordered the 3rd HDD and rather setup RAID5.
<snip>
HW RAID cards are a MAJOR expense for a home hobbyist who just likes to make sure their data is retrievablein the event of a HDD crash.
FYI only: a 2-port 3ware card is comparable in cost to a third drive. They have good Linux support, and can be booted from. And the 3ware cards are true hardware raid, not fakeraid. For IDE: 3ware Escalade 7006-2 Low-Profile 2-Port ATA RAID Controller - $110 at NewEgg. For SATA: 3ware 8006-2LP PCI 64-bit SATA 2-port - $125 at NewEgg. If you google, you can save some money buying them used. I've had 20 or more 2-port IDE 3ware controllers in use for 5+ years. Not a failure on any of them, so I would consider used if I was pinching pennies. Greg -- Greg Freemyer
----- Original Message -----
From: "Hylton Conacher (ZR1HPC)"
2008/9/22 Andrew Joakimsen
: On Mon, Sep 22, 2008 at 12:38 PM, Greg Freemyer
wrote: <snip>
No because I've already "been there" and "done that" -- It is IMPOSSIBLE to recover a RAID-1 array that just one day (after properly shutting down the system and not having hardware failures) decided not to boot.
Ahhh, now this is interesting :) The main reason I was creating the RAID1 array was so that if one drive failed, I could receive a message from mdadm, swap out the bad drive and continue working, without losing data.
In view of the above comment, I guess it is time I ordered the 3rd HDD and rather setup RAID5.
You can get a message when a drive fails if you enable the mdadmd service, and you can boot up a raid1, 5 or 10 with one drive dead or gone, no problem. It is more finicky than hardware raid in that it's up to you to ensure that /boot and the mbr are always either on a single plain drive or on raid1 or raid10, and it's up to you to ensure that if your normal booting drive happens to be the one that dies, the other drives also have a valid mbr and active partition, so that when the motherboard bios falls back to booting from some other drive, it can. I ensure both of those requirements just by putting this in /etc/grub.conf (8-drive system with a small /boot partition on each drive, all 8 in raid1, so all 8 partitions are identical copies of each other):

setup (hd0) (hd0,0)
setup (hd1) (hd1,0)
setup (hd2) (hd2,0)
setup (hd3) (hd3,0)
setup (hd4) (hd4,0)
setup (hd5) (hd5,0)
setup (hd6) (hd6,0)
setup (hd7) (hd7,0)
quit

That's it. That causes grub to go into the mbr of each drive, and tells grub that /boot is the first partition on each drive. This edit, together with the fact that /boot is a raid1 including all 8 first partitions, ensures that any drive may boot the system. I must do this edit manually and outside of yast, because yast refuses to leave it alone even when I use the expert option to manually edit the same file via yast. But if you edit it outside of yast and then don't touch the bootloader dialogs in yast, then yast does leave the file alone, even when doing kernel upgrades (in which yast edits menu.lst automatically) and running mkinitrd. These are issues you don't have to worry about with hardware raid, but I don't consider them burdensome. The advantage goes further than the cost of one hardware raid card. The advantage is you can do, redo, copy, or fix this anywhere, any time, on any machine. If you had all the money in the world it would still be a pain dealing with hardware raid, simply because grocery stores don't sell hardware raid cards, nor does any local store. If there is an electronics or computer shop anywhere near you that sells even one hardware raid card, you are extremely lucky, and then you almost certainly don't have a selection but are stuck with whatever they have. Think 3ware are junk, like me? Oh well, order the adaptec or lsi online and wait... Think adaptec are overpriced? Oh well, order the 3ware online and wait... You need pci and they only have pci-e? Oh well, etc. etc. There will always be some special cases where it's possible to trick the system into failing by some unfortunate coincident sequence of events. But that's true for hardware raid too. -- Brian K. White brian@aljex.com
It is more finicky than hardware raid in that it's up to you to ensure that /boot and the mbr are always either on a single plain drive or on raid1 or raid10,
Error. Sorry. Plain drive or raid1, no raid10 or anything else. -- Brian K. White brian@aljex.com
----- Original Message -----
From: "Andrew Joakimsen"
On Mon, Sep 22, 2008 at 5:52 AM, Rui Santos
wrote: 3) Interoperability. With most Intel chipsets you can hotplug your HDs. Just use the mdadm tool to remove the HD from the array, remove the HD, add a new one and rebuild your array. You can do this while your machine is in production mode. I believe you are unable to do that with a mobo pseudo-raid?
You forgot to mention that mdadm is CRAP. It will de-sync the array for no reason and there is no way to force a rebuild. You will end up having to rebuild the array from scratch. Raid 0 & 5 will suffer full data loss.
Only partially true. It's perfectly possible to force a rebuild. In fact, you can force rebuilds in mdadm in situations where no firmware raid will ever let you. If you don't know how, that's a you problem, not an mdadm problem. As for de-sync for no reason, yes, that is a weakness of linux's software raid. Aside from it happening to me in an empirically provable way, when the question was posed to engineers from adaptec and lsi, without pre-loading their thoughts by describing any symptoms, they predicted exactly the symptoms I was getting. I got this 2nd hand from a hardware vendor and system builder (Seneca Data), so it's fuzzy talk because the engineers were trying to talk layman and the guy at Seneca was only barely able to follow, but the gist was that at the low level, linux handles the disks differently and is quicker to assume a disk is bad or that a particular operation has failed, whereas all the hardware raid cards take more active control of the disks and are more forgiving of transient disk (mis)behavior, such as an op not completing within a certain time window, or an op failing once but succeeding if simply immediately repeated. This theory turns out to exactly agree with behaviour I saw on a set of 10 identical servers that started out with 8 sata drives each, hooked up as 4 on the motherboard sata controller (nvidia) and 4 on a pci-e LSI card. All plain sata, no hardware raid or fake-raid. All boxes were loaded up with exactly the same software and configuration via autoyast / autoinst.xml: opensuse 10.3 i386 with software raid0:swap, raid1:/boot and raid5:/. All boxes had randomly failed drives; some boxes couldn't even finish the install process before at least one drive went bad, others would run a few days and then have one or more failures under no load, and only 2 of them never had a problem. For the first few drives I of course tried actually swapping in new drives and rebuilding; other times I just forced the existing drives to rebuild (yes, contrary to your claims, it's perfectly possible and works fine). Then I tried moving drives around to see if there were perhaps flaky hot-swap bays or connectors. Then I tried raid10 instead of 5. After several weeks of this, and after hearing the adaptec engineers' theory, we decided to take a chance and buy 10 adaptec 3805 pci-e raid cards. Reinstalled the exact same OS and configuration, aside from using aacraid instead of libata and mdraid, onto the exact same drives, same backplanes and hotswap bays and the rest of the server. Same power and cooling environment even... and we never had even one single problem on any drive on any server since then, and now they have all been in heavy production for several months. So, clearly the drives weren't really bad, yet linux software raid marked them failed left & right. However, it's also true that only certain hardware combinations may tickle this software weakness. I have several other machines in heavy production using purely software raid, sometimes raid 5, sometimes 10, that have been cranking away for a couple of years now without a blip. They are using different low-level hardware and drivers, and sometimes different (2 years old) versions of linux. So, mdraid isn't necessarily "crap", it just has compatibility quirks like everything else on the planet. And as for recommending to use or avoid it, as I said before, yes, you are making the right call that you should probably not use it.
Other people, however, should make their own call based on something other than the fact that you don't know how to use mdadm. Just like I should probably not attempt to fly a helicopter. They are complicated and take a long time to learn to use, and I don't know how to fly them, yet I'm pretty sure they aren't just "crap" as a whole class. I can fix, and have fixed, problems in mdadm that no hardware raid will even let you think about. You can set up raid arrangements that no hardware raid can possibly do. You can perform operations on live running systems that no hardware raid array can possibly do. A software raid array can be copied and run on any hardware linux supports. (In fact, you can do that while mounted, live & running. Try moving a hardware raid array with mounted filesystems from a 3ware card to an nfs share, without any interruption.) Crap is your opinion and you're free to express it, but you should stop claiming that things are impossible just because you couldn't figure them out or didn't want to spend the necessary time learning, which in this case pretty much requires experimenting and testing in a methodical manner, not just reading the mdadm man page, though it definitely starts with that. Brian K. White brian@aljex.com
On Mon, Sep 22, 2008 at 3:27 PM, Brian K. White
It's perfectly possible to force a rebuild. In fact, you can force rebuilds in mdadm in situations where no firmware raid will ever let you. If you don't know how, that's a you problem not an mdadm problem.
I know how, and I issue the right command. It says /dev/sdb3 or whatnot DOES NOT EXIST. But if you do ll /dev/sdb3 or even cat /dev/sdb3, the device is obviously there. So yes, mdadm is crap and should never be used. If you need to do mdadm /dev/md0 --fail /dev/sdb3 and it says sdb3 does not exist, there are serious issues of the developers piping their toilets into their code.
----- Original Message -----
From: "Andrew Joakimsen"
On Mon, Sep 22, 2008 at 3:27 PM, Brian K. White
wrote: It's perfectly possible to force a rebuild. In fact, you can force rebuilds in mdadm in situations where no firmware raid will ever let you. If you don't know how, that's a you problem not an mdadm problem.
I know how, and I issue the right command. It says /dev/sdb3 or whatnot DOES NOT EXIST.
But if you do ll /dev/sdb3 or even cat /dev/sdb3 the device is obviously there.
So yes, mdadm is crap and should never be used. If you need to do mdadm /dev/md0 --fail /dev/sdb3 and it says sdb3 does not exist, there are serious issues of the developers piping their toilets into their code.
Wrong. (Unless you can supply enough exact commands and responses and other observations to prove your diagnostic process and deductions aren't full of holes. You have not done so above.) I have seen a few different things that each were different problems, yet each could have been described roughly as above, and yet in each case the drive was not actually unavailable and all desired operations were able to be performed somehow. The exact steps varied in each case because the exact problem varied in each case. I don't know which of the exact problems you actually had because, as I said, in just my own little experience there was more than one way to get something roughly like that, so I can't say what exactly you could or should have done that would have worked. This all assumes good hardware, btw. A buggy disk or controller could actually make a disk appear bad and then later good again, or good then lock up, etc. As far as I'm concerned, you could even have bad hardware. You are saying something doesn't work, but you are not showing your deductive process, and so the claim is meaningless. Send me your problem disks that you think are impossible to assemble and I bet in a little while I can tell you how to assemble the array, as long as there actually is enough there to use. (If you did something stupid and blew away metadata that can't be recreated or inferred, well, no hardware raid card will save you from that either.) And I'm not even slightly an mdadm guru. I simply spent a good solid weekend, and then several smaller incidents, experimenting. I would say it's still black art to me. But even at this level I already have actually performed actions you claim are impossible, and have seen symptoms like you describe above, except I looked at the problem longer than 13 seconds and discovered the problem was not as it seemed, and that it was perfectly solvable in every case so far. That includes those 10 boxes I was talking about. The disks kept failing randomly, but it was always possible to rebuild and rejoin them. It sometimes took some poking and insight. I'm not saying it was always obvious what to do or why, just that it always turned out to be do-able even when it looked impossible based on the first and most obvious commands. So far my assertion stands. You should not expect mdraid to work for you, but that has no bearing on other people or on mdraid itself. You are merely saying that because you don't know how to fly helicopters, helicopters are garbage. I wish any of my machines had any problem right now so I could show exact commands myself, but they don't. Including all those impossible mdraid boxes. Brian K. White brian@aljex.com
The Monday 2008-09-22 at 17:59 -0400, Brian K. White wrote:
It's perfectly possible to force a rebuild. In fact, you can force rebuilds in mdadm in situations where no firmware raid will ever let you. If you don't know how, that's a you problem not an mdadm problem.
I know how, and I issue the right command. It says /dev/sdb3 or whatnot DOES NOT EXIST.
But if you do ll /dev/sdb3 or even cat /dev/sdb3 the device is obviously there.
So yes, mdadm is crap and should never be used. If you need to do mdadm /dev/md0 --fail /dev/sdb3 and it says sdb3 does not exist, there are serious issues of the developers piping their toilets into their code.
Wrong. (Unless you can supply enough exact commands and responses and other observations to prove your diagnostic process and deductions aren't full of holes. You have not done so above.)
He did, you know:
] Date: Fri, 29 Aug 2008 19:00:52 -0400
] From: Andrew Joakimsen <j>
] To: OpenSuSE Discussion Group
On Mon, Sep 22, 2008 at 6:27 PM, Carlos E. R.
But he did not wait long enough for answers, or did not apply them, and reinstalled, blaming mdadm.
I applied everything I could get my hands on. You don't think I tried every single posting? Nothing works. I spent two full weekends (Saturday and Sunday), and I certainly had more insight into it when the second system, after correctly halting, would not turn back on again! How do you explain that? Shut down, wait for it to turn off, swap a PCI card in the system, turn it back on, and the RAID does not work? If the system was correctly shut down, I would suspect the drives or the controller. The controller in that system is fine; only the drives were new. The manufacturer's long test shows no errors. The only other new factor is the PCI card, which was replacing an identical (defective) card, and the md RAID. PCI win-modem cards don't cause md RAID to think it is corrupted.
On Mon, Sep 22, 2008 at 5:59 PM, Brian K. White
----- Original Message ----- From: "Andrew Joakimsen"
To: "Brian K. White" Cc: Sent: Monday, September 22, 2008 4:05 PM Subject: Re: [opensuse] 10.2 no RAID to 11.0 RAID 1 On Mon, Sep 22, 2008 at 3:27 PM, Brian K. White
wrote: It's perfectly possible to force a rebuild. In fact, you can force rebuilds in mdadm in situations where no firmware raid will ever let you. If you don't know how, that's a you problem not an mdadm problem.
I know how, and I issue the right command. It says /dev/sdb3 or whatnot DOES NOT EXIST.
But if you do ll /dev/sdb3 or even cat /dev/sdb3 the device is obviously there.
So yes, mdadm is crap and should never be used. If you need to do mdadm /dev/md0 --fail /dev/sdb3 and it says sdb3 does not exist, there are serious issues of the developers piping their toilets into their code.
Wrong. (Unless you can supply enough exact commands and responses and other observations to prove your diagnostic process and deductions aren't full of holes. You have not done so above.)
I still have the drives. I am still looking for real instructions on how to use mdadm. One of the "step by step" guides even shows one of the errors as normal output! So I figured, what the hell, let me continue anyway, and of course it did not work.
I have seen a few different things that each were different problems, yet each could have been described roughly as above, and yet in each case the drive was not actually unavailable and all desired operations were able to be performed somehow. The exact steps varied in each case because the exact problem varied in each case. I don't know which of the exact problems you actually had, because as I said, there was just in my own little experience more than one way to get something roughly like that, so I can't say what exactly you could or should have done that would have worked.
Ah, so there is no universal test case. There has to be. Let's assume one drive is "bad": what then is the correct way to indicate this through mdadm and start the now-"degraded" RAID-1 array?
This all assumes good hardware btw. A buggy disk or controller could actually make a disk appear bad and then later good again or good then lock up etc.. As far as I'm concerned, you could have bad hardware even. You are saying something doesn't work, but you are not showing your deductive process and so the claim is meaningless. Send me your problem disks that you think are impossible to assemble and I bet in a little while I can tell you how to assemble the array as long as there actually is enough there to use. (if you did something stupid and blew away metadata that can't be recreated or inferred, well no hardware raid card will save you from that either.)
All I can say is the systems have an ASUS P4P800-VM mainboard (Intel 865G chipset). They ran Fedora for 2 years, and then I replaced the hard drives and installed openSUSE on md RAID. The same thing happened to two systems physically 20 miles apart. The hard drive manufacturer's long test "passed" on all four drives. The fact that I can mount each of the partitions that made up /dev/md0, that the md5 sums of all important files on the system match on both partitions, and just the fact that I can read the data off the individual partitions, further shows that it is not a hardware issue. I still have the drives; if I am wrong, I have no problem admitting it.
And I'm not even slightly an mdadm guru. I simply spent a good solid weekend, and then several smaller incidents, experimenting. I would say it's still black art to me. But even at this level I already have actually performed actions you claim are impossible, and have seen symptoms like you describe above, except I looked at the problem longer than 13 seconds and discovered the problem was not as it seemed, and that it was perfectly solvable in every case so far. That includes those 10 boxes I was talking about. The disks kept failing randomly, but it was always possible to rebuild and rejoin them. It sometimes took some poking and insight. I'm not saying it was always obvious what to do or why, just that it always turned out to be do-able even when it looked impossible based on the first and most obvious commands.
So far my assertion stands. You should not expect mdraid to work for you, but that has no bearing on other people or on mdraid itself. You are merely saying that because you don't know how to fly helicopters, that helicopters are garbage.
Prove me wrong. Because no one has been able to provide the proper commands to rebuild an array. There is no documentation on how to do it, the man page is vague, and the commands don't work correctly.
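For what it's worth, the usual sequence for that exact scenario is short (a sketch, assuming a mirror /dev/md0 built from /dev/sda3 and /dev/sdb3, with sdb3 the "bad" half):

  # assemble and start the mirror degraded, from the good half only
  mdadm --assemble --run /dev/md0 /dev/sda3
  # if the array is assembled but the bad member is still attached:
  mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3
  # once the disk is replaced and partitioned, re-add it and let it resync:
  mdadm /dev/md0 --add /dev/sdb3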
The Monday 2008-09-22 at 18:51 -0400, Andrew Joakimsen wrote:
Prove me wrong. Because no one has been able to provide the proper commands to rebuild an array. There is no documentation on how to do it, the man page is vague, and the commands don't work correctly.
There is a howto, and it is included with the distro. -- Cheers, Carlos E. R.
On what page of the manual? I never saw it....
On Mon, Sep 22, 2008 at 6:59 PM, Carlos E. R.
The Monday 2008-09-22 at 18:51 -0400, Andrew Joakimsen wrote:
Prove me wrong. Because no one has been able to provide the proper commands to rebuild an array. There is no documentation on how to do it, the man page is vague, and the commands don't work correctly.
There is a howto, and it is included with the distro.
The Monday 2008-09-22 at 19:01 -0400, Andrew Joakimsen wrote:
There is a howto, and it is included with the distro.
On what page of the manual? I never saw it....
The howtos are not part of the manuals; they are independent, and often by different authors:

cer@nimrodel:~> ls /usr/share/doc/howto/en/txt/ | grep -i raid
ATA-RAID-HOWTO.gz
Antares-RAID-sparcLinux-HOWTO.gz
Boot+Root+Raid+LILO.gz
DPT-Hardware-RAID-HOWTO.gz
Linux-Promise-RAID1-HOWTO.gz
Root-RAID-HOWTO.gz
Software-RAID-0.4x-HOWTO.gz
Software-RAID-HOWTO.gz

The last one is the one I mean. You have to install the howtos first (txt or html versions), or read them directly on the internet (TLDP, I think). -- Cheers, Carlos E. R.
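For reference, reading the gzipped text version in place is a one-liner (assuming the howto package is installed at the path shown above):

  zcat /usr/share/doc/howto/en/txt/Software-RAID-HOWTO.gz | less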
On Monday 22 September 2008 16:21, Carlos E. R. wrote:
The Monday 2008-09-22 at 19:01 -0400, Andrew Joakimsen wrote:
There is a howto, and it is included with the distro.
On what page of the manual? I never saw it....
The howtos are not part of the manuals, they are independent, and often by different authors:
cer@nimrodel:~> ls /usr/share/doc/howto/en/txt/ | grep -i raid
ATA-RAID-HOWTO.gz
Antares-RAID-sparcLinux-HOWTO.gz
Boot+Root+Raid+LILO.gz
DPT-Hardware-RAID-HOWTO.gz
Linux-Promise-RAID1-HOWTO.gz
Root-RAID-HOWTO.gz
Software-RAID-0.4x-HOWTO.gz
Software-RAID-HOWTO.gz
The last one is the one I mean. You have to install the howtos first (txt or html versions), or read them directly on the internet (TLDP, I think).
And do we still cling to the notion that local content index and search is unnecessary?? And if you do, wait until you collect 5 GB of PDF, gzip-compressed PostScript and HTML documents, as I have...
Randall Schulz
The Monday 2008-09-22 at 17:45 -0700, Randall R Schulz wrote:
On Monday 22 September 2008 16:21, Carlos E. R. wrote:
The Monday 2008-09-22 at 19:01 -0400, Andrew Joakimsen wrote:
There is a howto, and it is included with the distro.
On what page of the manual? I never saw it....
The howtos are not part of the manuals, they are independent, and often by different authors:
cer@nimrodel:~> ls /usr/share/doc/howto/en/txt/ | grep -i raid
ATA-RAID-HOWTO.gz
Antares-RAID-sparcLinux-HOWTO.gz
Boot+Root+Raid+LILO.gz
DPT-Hardware-RAID-HOWTO.gz
Linux-Promise-RAID1-HOWTO.gz
Root-RAID-HOWTO.gz
Software-RAID-0.4x-HOWTO.gz
Software-RAID-HOWTO.gz
The last one is the one I mean. You have to install the howtos first (txt or html versions), or read them directly on the internet (TLDP, I think).
And do we still cling to the notion that local content index and search is unnecessary??
What do you mean? I said nothing about indexing or searching.
And if you do, wait until you collect 5 GB of PDF, gzip-compressed PostScript and HTML documents, as I have...
The howtos are less than 10 megs. -- Cheers, Carlos E. R.
On Monday 22 September 2008 17:56, Carlos E. R. wrote:
The Monday 2008-09-22 at 17:45 -0700, Randall R Schulz wrote:
On Monday 22 September 2008 16:21, Carlos E. R. wrote:
The Monday 2008-09-22 at 19:01 -0400, Andrew Joakimsen wrote:
There is a howto, and it is included with the distro.
On what page of the manual? I never saw it....
The howtos are not part of the manuals, they are independent, and often by different authors:
cer@nimrodel:~> ls /usr/share/doc/howto/en/txt/ | grep -i raid
ATA-RAID-HOWTO.gz
Antares-RAID-sparcLinux-HOWTO.gz
Boot+Root+Raid+LILO.gz
DPT-Hardware-RAID-HOWTO.gz
Linux-Promise-RAID1-HOWTO.gz
Root-RAID-HOWTO.gz
Software-RAID-0.4x-HOWTO.gz
Software-RAID-HOWTO.gz
...
And do we still cling to the notion that local content index and search is unnecessary??
What do you mean? I said nothing about indexing or searching.
No, you didn't, and I wasn't really addressing you, but rather the person who (at least possibly) had the information they needed on their very own disk but was unaware of it and (presumably) had no real way to find it, lacking specific knowledge of its existence and whereabouts. This goes to the oft-repeated assertion that "locate", "find" and maybe "grep" are all one needs to manage content on a desktop or workstation installation of Linux. I cannot agree, and I think this is just one example coming to the surface at the moment that refutes that claim.
And if you do, wait until you collect 5 GB of PDF, gzip-compressed PostScript and HTML documents, as I have...
The howtos are less than 10 MB.
Of course. And if they were the only documents on a system, one could indeed just use grep. But who does not accumulate a lot of documentation (and/or email, for that matter), whether part of the distribution or separately acquired, that would benefit from being indexed and searchable, à la Beagle, Google Desktop or some other document indexing system?
-- Cheers, Carlos E. R.
Randall Schulz
The Monday 2008-09-22 at 18:05 -0700, Randall R Schulz wrote:
...
And do we still cling to the notion that local content indexing and search is unnecessary??
What do you mean? I said nothing about indexing or searching.
No, you didn't, and I wasn't really addressing you, but rather the person who (at least possibly) had the information they needed on their very own disk but was unaware of it and (presumably) had no real way to find it, lacking specific knowledge of its existence and whereabouts.
This goes to the oft-repeated assertion that "locate", "find" and maybe "grep" are all one needs to manage content on a desktop or workstation installation of Linux. I cannot agree, and I think this is just one example coming to the surface at the moment that refutes that claim.
Ah! :-) I understand now. Yep. Heh, for this particular howto it was my memory, not a search tool ;-)
And if you do, wait until you collect 5 GB of PDF, gzip-compressed PostScript and HTML documents, as I have...
The howtos are less than 10 MB.
Of course. And if they were the only documents on a system, one could indeed just use grep. But who does not accumulate a lot of documentation (and/or email, for that matter), whether part of the distribution or separately acquired, that would benefit from being indexed and searchable, à la Beagle, Google Desktop or some other document indexing system?
True enough. However, Beagle on my system doesn't seem to find the documents I want... I deactivated it not because of the CPU it uses, but because it did not find the proper things, which is a pity. I'll give it another chance. -- Cheers, Carlos E. R.
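(A quick way to check what the index actually knows: Beagle ships a command-line client, so -- assuming the beagle package is installed -- something like

  beagle-query raid

queries the running beagled daemon and prints the indexed matches, a sketch of a sanity check before giving up on the GUI.)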
On Tuesday 23 September 2008 03:03, Carlos E. R. wrote:
...
The howtos are less than 10 MB.
Of course. And if they were the only documents on a system, one could indeed just use grep. But who does not accumulate a lot of documentation (and/or email, for that matter), whether part of the distribution or separately acquired, that would benefit from being indexed and searchable, à la Beagle, Google Desktop or some other document indexing system?
True enough. However, Beagle on my system doesn't seem to find the documents I want... I deactivated it not because of the CPU it uses, but because it did not find the proper things, which is a pity.
I'll give it another chance.
For what it's worth, I use Google Desktop on my main system, which is running 10.0, and it seems to do pretty well. It's quite measured in its use of resources when indexing, though I have found it running uncontrollably at 100% CPU once or twice, and had to kill and restart it, which is obviously uncool. I've left Beagle in its default installation state on my 10.3 box, which I use for 3D applications and to run the server for the application I'm developing. Randall Schulz
----- Original Message -----
From: "Randall R Schulz"
On Monday 22 September 2008 16:21, Carlos E. R. wrote:
The Monday 2008-09-22 at 19:01 -0400, Andrew Joakimsen wrote:
There is a howto, and it is included with the distro.
On what page of the manual? I never saw it....
The howtos are not part of the manuals, they are independent, and often by different authors:
cer@nimrodel:~> ls /usr/share/doc/howto/en/txt/ | grep -i raid
ATA-RAID-HOWTO.gz
Antares-RAID-sparcLinux-HOWTO.gz
Boot+Root+Raid+LILO.gz
DPT-Hardware-RAID-HOWTO.gz
Linux-Promise-RAID1-HOWTO.gz
Root-RAID-HOWTO.gz
Software-RAID-0.4x-HOWTO.gz
Software-RAID-HOWTO.gz
The last one is the one I mean. You have to install the howtos first (txt or html versions), or read them directly on the Internet (at TLDP, I think).
And do we still cling to the notion that local content indexing and search is unnecessary??
And if you do, wait until you collect 5 GB of PDF, gzip-compressed PostScript and HTML documents, as I have...
The argument is that the way the math works out for me, with indexing, I suffer slowness 100% of the time in order to get a speed up 1% of the time. That math is backwards to me, and it's far worse than merely 100 to 1 in reality. I would rather have my machine as fast as possible 100% of the time, and have to go looking for something the hard way 1% of the time. The rest of the time, ordinary organization will let me find a program or document immediately without having to rely on an indexing and search system.

If I have a library of documents or some other too-large-for-that mass of data, then of course I place it in a library-type application or database which has indexing and searching, but that rarely has to rebuild indexes and search constantly for random changes in the data. Whenever any data is added or changed, the relevant indexes are surgically updated the same way the database itself is with the payload data. By contrast, a desktop indexer has to constantly search for all the random changes I may make to the directories within its scope.

Reiser4 with built-in indexing (or via a module) may be the answer for that, allowing indexing without constant searching, compiling, collating and index rebuilding. The fs is in essence a db engine, and it can maintain indexes the way db engines do.

Finally, even with desktop indexing, /usr/share/doc is not within anyone's desktop or home directory, so presumably it wouldn't be indexed by these things _anyways_. As for other possible scenarios: if these things are configured to index the entire filesystem / all filesystems, then that is automatically horrendous and wrong, even if I could be convinced to tolerate them in a home directory. A user or a sysadmin might put anything anywhere, and it's patently stupid to allow some indexer to try to search through gigs of irrelevant data, repeatedly. Not to mention that merely accessing a file may screw up some other process that watches that file's access time. If the indexers only search in a specific list of directories, whether all in their home directory or including some elsewhere like /usr/share, well, if the user can be expected to administer the list of indexed directories, then they already know enough not to need them. So the benefit comes down to something like "well, the user may not know the directories where docs are, but the package or distribution developers have preconfigured the list into the indexer" -- so by default we should all have our PCs run slow and our drives die sooner so that some user doesn't have to know a short list of likely places to look for docs? That is insane.

-- Brian K. White brian@aljex.com http://www.myspace.com/KEYofR +++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++. filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!
Brian K. White wrote:
And do we still cling to the notion that local content indexing and search is unnecessary??
And if you do, wait until you collect 5 GB of PDF, gzip-compressed PostScript and HTML documents, as I have...
The argument is that the way the math works out for me, with indexing, I suffer slowness 100% of the time in order to get a speed up 1% of the time.
That math is backwards to me, and it's far worse than merely 100 to 1 in reality.
I would rather have my machine as fast as possible 100% of the time, and have to go looking for something the hard way 1% of the time.
The rest of the time, ordinary organization will let me find a program or document immediately without having to rely on an indexing and search system.
If I have a library of documents or some other too-large-for-that mass of data, then of course I place it in a library-type application or database which has indexing and searching, but that rarely has to rebuild indexes and search constantly for random changes in the data. Whenever any data is added or changed, the relevant indexes are surgically updated the same way the database itself is with the payload data. By contrast, a desktop indexer has to constantly search for all the random changes I may make to the directories within its scope.
Reiser4 with built-in indexing (or via a module) may be the answer for that, allowing indexing without constant searching, compiling, collating and index rebuilding. The fs is in essence a db engine, and it can maintain indexes the way db engines do.
Finally, even with desktop indexing, /usr/share/doc is not within anyone's desktop or home directory, so presumably it wouldn't be indexed by these things _anyways_. As for other possible scenarios: if these things are configured to index the entire filesystem / all filesystems, then that is automatically horrendous and wrong, even if I could be convinced to tolerate them in a home directory. A user or a sysadmin might put anything anywhere, and it's patently stupid to allow some indexer to try to search through gigs of irrelevant data, repeatedly. Not to mention that merely accessing a file may screw up some other process that watches that file's access time. If the indexers only search in a specific list of directories, whether all in their home directory or including some elsewhere like /usr/share, well, if the user can be expected to administer the list of indexed directories, then they already know enough not to need them. So the benefit comes down to something like "well, the user may not know the directories where docs are, but the package or distribution developers have preconfigured the list into the indexer" -- so by default we should all have our PCs run slow and our drives die sooner so that some user doesn't have to know a short list of likely places to look for docs? That is insane.
Brian, I agree 100%... with one proviso or quid pro quo:

<quote> The older I get, and the more of life is stored as electronic information, even with my (mind you, very good) 'ordinary organization', the LONGER 'looking for something the hard way 1% of the time' takes! </quote>

I haven't found a usable, efficient or well-written indexing system yet. However, I do see some interesting things on the horizon. There are a couple of document management systems (DMS) being rewritten right now that might hold the key, by combining strong lightweight directory indexing with cross-platform access to the stored information. Only time will tell. The one thing I am convinced of, however, is that "the dreaded dog (beagle) type indexing ain't it."

-- David C. Rankin, J.D., P.E. Rankin Law Firm, PLLC 510 Ochiltree Street Nacogdoches, Texas 75961 Telephone: (936) 715-9333 Facsimile: (936) 715-9339 www.rankinlawfirm.com
I agree 100%... with one proviso or quid pro quo:
<quote>
The older I get, and the more of life is stored as electronic information, even with my (mind you, very good) 'ordinary organization', the LONGER 'looking for something the hard way 1% of the time' takes!
</quote>
Yeah, I have to agree with that too. DMSs are coming along. I figure their quality and selection of features and design philosophies will be good enough by the time I need to use them by default rather than on special occasions. A few directories were good enough for my mp3s and pictures for a while; now I use Ampache and Gallery. I figure some mediawiki plugin or something completely else will turn up for random documents by the time I really need it.

Emails are the closest thing I have to a problem. I have been saving most emails for years, and even so the store is still searchable in a reasonable time. But it's just a huge Outlook Express store directory. There is a lot of important documentation in there that it really would be nice to have more generically accessible. (Yeah, yeah, I hate MS as much as the next guy, but I need to use windows as my normal desktop and I need my email to work 100%, and after the 2nd or 3rd time netscape blew up and destroyed its email db files and I lost several months' or a year's worth of email, I gave up on 3rd party mail clients on windows. That was so many years ago that it was netscape and eudora, but the lesson remains sound.)

-- Brian K. White brian@aljex.com http://www.myspace.com/KEYofR +++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++. filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!
On Tuesday 23 September 2008 14:21, Brian K. White wrote:
...
And do we still cling to the notion that local content indexing and search is unnecessary??
And if you do, wait until you collect 5 GB of PDF, gzip-compressed PostScript and HTML documents, as I have...
The argument is that the way the math works out for me, with indexing, I suffer slowness 100% of the time in order to get a speed up 1% of the time.
That math is backwards to me, and it's far worse than merely 100 to 1 in reality.
I don't know what indexer you refer to (there are many), but people's experience with Beagle seems mixed, though we mostly hear the gripes in these parts. But as far as the indexer I use, Google Desktop, is concerned, in my experience this is not at all the case. Its demands are modest, metered and sensitive to other system activity. Apart from the couple of times that it has gone into a loop, it has never bothered me in any way.

On my system Google Desktop currently has indexed:

  23,279 email messages
  4,818 web history pages
  94,302 documents
  19,301 media files
  46 "other"

It is never obtrusive in its operation. In my view, resistance to indexing the contents of one's personal information cache is perverse.
...
-- Brian K. White
Randall Schulz
----- Original Message -----
From: "Andrew Joakimsen"
On Mon, Sep 22, 2008 at 5:59 PM, Brian K. White wrote:
----- Original Message ----- From: "Andrew Joakimsen" Sent: Monday, September 22, 2008 4:05 PM Subject: Re: [opensuse] 10.2 no RAID to 11.0 RAID 1
On Mon, Sep 22, 2008 at 3:27 PM, Brian K. White wrote:
It's perfectly possible to force a rebuild. In fact, you can force rebuilds in mdadm in situations where no firmware raid will ever let you. If you don't know how, that's a you problem, not an mdadm problem.
I know how, and I issued the right command. It says /dev/sdb3 or whatnot DOES NOT EXIST.
But if you do ll /dev/sdb3 or even cat /dev/sdb3 the device is obviously there.
So yes, mdadm is crap and should never be used. If you need to do mdadm /dev/md0 --fail /dev/sdb3 and it says sdb3 does not exist, there is a serious issue of the developers piping their toilets into their code.
Wrong. (Unless you can supply enough exact commands and responses and other observations to prove your diagnostic process and deductions aren't full of holes. You have not done so above.)
I still have the drives. I am still looking for real instructions on how to use mdadm. One of the "step by step" guides even shows one of the errors as normal output! So I figured, what the hell, let me continue anyway, and of course it did not work.
I have seen a few different things that each were different problems, yet each could have been described roughly as above, and yet in each case the drive was not actually unavailable and all desired operations were able to be performed somehow. The exact steps varied in each case because the exact problem varied in each case. I don't know which of the exact problems you actually had, because as I said, there was just in my own little experience more than one way to get something roughly like that, so I can't say what exactly you could or should have done that would have worked.
Ah, so there is no universal test case. There has to be. Let's assume one drive is "bad": what then is the correct way to indicate this through mdadm and start the now "degraded" RAID-1 array?
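(For reference, the textbook sequence with example device names -- a sketch, assuming the array is still assembled and running:

  mdadm /dev/md0 --fail /dev/sdb3      # mark the member faulty
  mdadm /dev/md0 --remove /dev/sdb3    # detach it from the array
  # swap the disk and recreate the partition, then:
  mdadm /dev/md0 --add /dev/sdb3       # re-add; the resync starts automatically

The harder case, assembling an array that refuses to start, is what Lars's "mdadm -A -R" answer further down addresses.)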
This all assumes good hardware, btw. A buggy disk or controller could actually make a disk appear bad and then later good again, or good then lock up, etc. As far as I'm concerned, you could even have bad hardware. You are saying something doesn't work, but you are not showing your deductive process, and so the claim is meaningless. Send me your problem disks that you think are impossible to assemble, and I bet in a little while I can tell you how to assemble the array, as long as there actually is enough there to use. (If you did something stupid and blew away metadata that can't be recreated or inferred, well, no hardware raid card will save you from that either.)
All I can say is that the systems have an ASUS P4P800-VM mainboard (Intel 865G chipset). They ran Fedora for 2 years, and then I replaced the hard drives and installed openSUSE on md RAID. The same thing happens to two systems physically 20 miles apart. The hard drive manufacturer's long test "passed" on all four drives. The fact that I can mount each of the partitions that made up /dev/md0, and that the md5 of all important files matches on both partitions (and just the fact that I can read the data off the individual partitions), further shows that it is not a hardware issue. I still have the drives; if I am wrong I have no problem admitting it.
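(The check Andrew describes, sketched with example names: a raid1 member with an old-style 0.90 superblock can usually be mounted directly read-only, since that metadata lives at the end of the partition:

  mount -o ro /dev/sdb3 /mnt
  md5sum /mnt/etc/fstab    # compare the same file across the two halves

This verifies the data on each half, but says nothing about why the md layer refuses to assemble them.)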
Well, just for starters, a couple of high-level (as in far removed from the nitty gritty) hints while I go look up your original post that Carlos referred to.

One thing I've seen, which I don't think is your problem but which shows the kind of thing that happens: a disk will drop out of the array and reappear instantly as a higher drive letter than the system really has. Maybe there are sda, sdb, sdc, sdd, and then sdb disappears and suddenly an sde appears. It's possible via mdadm to add sde or sde3 etc. to the array and tell it to start rebuilding. At the next reboot the 4 drives will be sd[a-d] again, because the problem was the kernel driver momentarily losing connection to the drive, or thinking it did. There are sata driver options that can be set via udev to affect what happens when a drive goes away, which might stop that renaming.

"force" does force assemble (or force whatever action), but perhaps the context and scope is unclear. There is no magic force-fix-everything; there are force options for the various individual actions, and in each case the scope of the force option is limited to that type of action. If the raid formatting is OK, and merely the data is assumed to be inconsistent due to mismatched event counters, then force will force assemble. If the raid metadata is scrambled, such as by a motherboard bios with its fakeraid option turned on placing some data of its own on the disk, then there is no way to force assemble or force run until you first make the raid formatting good again. This may mean deleting the array and/or just (re)creating it with all the same exact settings as the original, and --force, right over top of the disks. This won't touch the data itself, but will create all-new raid formatting in between and around the data (or wherever it is that mdraid stores its data). The new raid formatting will be consistent, and so it can be assembled and run. It may or may not be mountable at that point; you just have a disk at that point, and maybe it has a good filesystem or maybe a scrambled one. If a disk is physically missing, the array can still be made to run, just degraded.

In other words, you can only fix one thing at a time, and all lower things must be fixed, or made to appear fixed or treated as fixed, before the highest-level operation, run, can be done. That top level may or may not need --force itself too; force run is just to force it to run in situations where it is physically possible to run but it defaults not to for safety, such as when a disk is missing from an array.
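(A sketch of that recreate-over-top last resort, with hypothetical device names and settings; every parameter must exactly match the original array, so treat this as illustration, not a recipe:

  mdadm --stop /dev/md0
  # recreate only the raid metadata; level, member order and count MUST match the original
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sda3 /dev/sdb3
  fsck -n /dev/md0    # read-only filesystem check before trusting the result

mdadm will warn that the devices appear to contain an existing array; in this one scenario that warning is expected.)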
And I'm not even slightly an mdadm guru. I simply spent a good solid weekend, and then several smaller incidents, experimenting. I would say it's still a black art to me. But even at this level I have already actually performed actions you claim are impossible, and have seen symptoms like you describe above, except I looked at the problem longer than 13 seconds and discovered the problem was not as it seemed, and that it was perfectly solvable in every case so far. That includes those 10 boxes I was talking about. The disks kept failing randomly, but it was always possible to rebuild and rejoin them. It sometimes took some poking and insight. I'm not saying it was always obvious what to do or why, just that it always turned out to be do-able even when it looked impossible based on the first and most obvious commands.
So far my assertion stands. You should not expect mdraid to work for you, but that has no bearing on other people or on mdraid itself. You are merely saying that because you don't know how to fly helicopters, helicopters are garbage.
Prove me wrong. Because no one has been able to provide the proper commands to rebuild an array. There is no documentation on how to do it, the man page is vague, and the commands don't work correctly.
The commands to rebuild an array depend on what's wrong with the array, and on what you may or may not know about the array that the software cannot know. I have had disks with physically bad sectors, and more than one disk bad in a raid5 array, and yet lost nothing, because I knew something the software couldn't know. Or I guess it could know, come to think of it, but anyway: I knew that although the disks were technically inconsistent, the data I cared about was actually OK, and I knew that one of the disks was a dd_rescue clone of a physically bad disk, so the new disk wasn't physically bad, but it did have some chunks missing. So I knew it was OK to create an array from scratch, using the exact same settings, right over top of the disks with data on them -- _raid5_ data. I didn't know what particular commands were OK before I tried it, of course, but I knew no data was out in the scrambled part of the disk.

The man page IS vague. This is why I keep saying that mdadm is not mastered just by reading man mdadm. It may never be practical to write fully comprehensive documentation of mdraid in man mdadm either; it may be worth a small book. mdraid isn't a great idea until after you've experimented some and figured out what the different buttons do by having pressed them yourself, on a system where it was OK to do things that might erase everything. Some actions sound like the absolute last thing you want to do by reading the man page, and yet, as long as you know what's going on, it's not only OK, it's the only way to fix the problem. Like deleting and recreating a raid0 or raid5 array.

And it's not wise to rely on something that you don't know how to manipulate. That's basically true of all unix since day one and still today, not just linux's mdraid. Did you _really_ know that fsck wasn't going to erase everything the first time you ran it? I did not know that "cp" wasn't going to do something bad the first time I ran it. (And of course, in fact cp can be about as dangerous as anything else.)

<ridiculous wandering aside> Maybe that's an interesting aspect of the difference between learning something on your own vs. having a teacher guide you. If I were in a class and the teacher said "create a new file named too.txt that's a copy of me.txt like this: cp me.txt too.txt", sure, in that case I would not have worried about anything even the very first time I ran cp. I probably would not have been -thinking- about anything, let alone worrying. Hm... One of my favorite people's favorite phrases is "learn by destroying". That really is the best way. When you do something yourself and it blows up, and then when you do something else yourself and it is harmless, that is the surest knowledge in your head. You have not a trace of stress sticking your hand right in the middle of the big scary gear works that everyone else doesn't dare touch, because you already did this at home lots of times and it's not a mystery (or at least the handle you are grabbing isn't, even if the machine itself still is; that's OK).

Probably the easiest way to test things out is with a few usb thumb drives. You could use ramdisks, but then the kernel may protect you from performing exactly the experiments you need to do. But it can't stop you from yanking out a thumb drive or plugging it back in.
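(A variation on the thumb-drive idea, not from the post above: loopback files make equally disposable practice disks. A sketch, run as root, with arbitrary sizes and paths:

  dd if=/dev/zero of=/tmp/d0.img bs=1M count=64
  dd if=/dev/zero of=/tmp/d1.img bs=1M count=64
  losetup /dev/loop0 /tmp/d0.img
  losetup /dev/loop1 /tmp/d1.img
  mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
  mdadm /dev/md9 --fail /dev/loop0    # practice breaking the mirror
  cat /proc/mdstat                    # watch the degraded state

Unlike a thumb drive, though, a loop device can't be yanked mid-write.)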
In your case, plug the original disks in, boot up a suse installer or a knoppix or ubuntu or suse live cd, and run the following, just to see what's there for starters:

  lsmod | less
  dmesg | less
  cat /proc/mdstat
  mdadm -QE /dev/sd[a-z]3 | less
  mdadm -QD /dev/md[0-9] | less

-- Brian K. White brian@aljex.com http://www.myspace.com/KEYofR +++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++. filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!
On 2008-09-22T18:51:13, Andrew Joakimsen wrote:
Ah, so there is no universal test case. There has to be. Let's assume one drive is "bad": what then is the correct way to indicate this through mdadm and start the now "degraded" RAID-1 array?
mdadm -A -R

And then add a new drive; the rebuild happens automatically.

Regards, Lars -- Teamlead Kernel, SuSE Labs, Research and Development SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg) "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
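(Spelled out with example device names: -A is --assemble and -R is --run, which starts the array even though a member is missing.

  mdadm --assemble --run /dev/md0 /dev/sda3    # bring up the degraded mirror from the good half
  mdadm /dev/md0 --add /dev/sdb3               # add the replacement; the rebuild starts on its own
)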
On 2008-09-22T16:05:28, Andrew Joakimsen wrote:
So yes, mdadm is crap and should never be used. If you need to do mdadm /dev/md0 --fail /dev/sdb3 and it says sdb3 does not exist, there is a serious issue of the developers piping their toilets into their code.
I can see how an attitude like that, including intentionally screwing up the openSUSE wiki, might make developers less inclined to work with you. But in fact, Neil is very responsive. Is there a bugzilla associated with this apparent defect? Have you tried the linux-raid mailing list too, possibly in a more constructive tone?

mdadm has always worked fine for me. I have had some hw issues with my arrays over the last decade and a half, but md raid always recovered fine.

Regards, Lars -- Teamlead Kernel, SuSE Labs, Research and Development SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg) "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
participants (13)
- Andrew Joakimsen
- Brian K. White
- Carlos E. R.
- Carlos E. R.
- David C. Rankin
- Felix Miata
- Greg Freemyer
- Hylton Conacher (ZR1HPC)
- Lars Marowsky-Bree
- Neil
- Per Jessen
- Randall R Schulz
- Rui Santos