Mailinglist Archive: opensuse (1695 mails)

Re: [opensuse] 10.2 no RAID to 11.0 RAID 1

----- Original Message ----- From: "Andrew Joakimsen" <joakimsen@xxxxxxxxx>
To: "Brian K. White" <brian@xxxxxxxxx>
Cc: <opensuse@xxxxxxxxxxxx>
Sent: Monday, September 22, 2008 6:51 PM
Subject: Re: [opensuse] 10.2 no RAID to 11.0 RAID 1


On Mon, Sep 22, 2008 at 5:59 PM, Brian K. White <brian@xxxxxxxxx> wrote:

----- Original Message -----
From: "Andrew Joakimsen" <joakimsen@xxxxxxxxx>
To: "Brian K. White" <brian@xxxxxxxxx>
Cc: <opensuse@xxxxxxxxxxxx>
Sent: Monday, September 22, 2008 4:05 PM
Subject: Re: [opensuse] 10.2 no RAID to 11.0 RAID 1


On Mon, Sep 22, 2008 at 3:27 PM, Brian K. White <brian@xxxxxxxxx> wrote:

It's perfectly possible to force a rebuild.
In fact, you can force rebuilds in mdadm in situations where no firmware raid will ever let you.
If you don't know how, that's a you problem not an mdadm problem.

I know how, and I issue the right command. It says /dev/sdb3 or whatnot
DOES NOT EXIST.

But if you do ll /dev/sdb3 or even cat /dev/sdb3 the device is obviously there.

So yes, mdadm is crap and should never be used. If you need to do mdadm
/dev/md0 --fail /dev/sdb3 and it says sdb3 does not exist, there is a
serious issue of the developers piping their toilets into their code.

Wrong. (Unless you can supply enough exact commands and responses and other observations to prove your diagnostic process and deductions aren't full of holes. You have not done so above.)

I still have the drives. I am still looking for real instructions on
how to use mdadm. One of the "step by step" guides even shows one of
the errors as normal output! So I figured, what the hell, let me continue
anyway, and of course it did not work.

I have seen a few different things that were each different problems, yet each could have been described roughly as above, and yet in each case the drive was not actually unavailable and all desired operations could be performed somehow. The exact steps varied in each case because the exact problem varied in each case. I don't know which exact problem you actually had, because, as I said, even in my own little experience there was more than one way to get something roughly like that, so I can't say exactly what you could or should have done that would have worked.

Ah, so there is no universal test case. There has to be one. Let's assume
one drive is "bad"; what, then, is the correct way to indicate this
through mdadm and start the now-degraded RAID-1 array?
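For what it's worth, the usual drill for that case goes roughly as below. This assumes a two-member RAID-1 at /dev/md0 built from sda3 and sdb3; the device names are illustrative, not taken from the thread.

```shell
# Mark the member as faulty, then pull it out of the array.
mdadm /dev/md0 --fail /dev/sdb3
mdadm /dev/md0 --remove /dev/sdb3

# The array keeps running degraded; check its state.
cat /proc/mdstat
mdadm --detail /dev/md0

# After replacing the disk (and recreating the partition),
# add the new member back and let it resync.
mdadm /dev/md0 --add /dev/sdb3

# If the machine was rebooted with a member missing and the
# array won't start on its own, assemble it degraded:
mdadm --assemble --run /dev/md0 /dev/sda3
```

The key distinction is that --fail/--remove/--add operate on a running array, while --assemble --run is what starts a degraded array that refuses to come up by itself.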

This all assumes good hardware, btw. A buggy disk or controller could make a disk appear bad and then later good again, or appear good and then lock up, etc.
As far as I'm concerned, you could even have bad hardware. You are saying something doesn't work, but you are not showing your deductive process, so the claim is meaningless. Send me your problem disks that you think are impossible to assemble, and I bet in a little while I can tell you how to assemble the array, as long as there actually is enough there to use. (If you did something stupid and blew away metadata that can't be recreated or inferred, well, no hardware raid card will save you from that either.)

All I can say is the systems have an ASUS P4P800-VM mainboard (Intel
865G chipset). They ran Fedora for 2 years, and then I replaced the
hard drives and installed openSUSE on md RAID. The same thing happened
to two systems physically 20 miles apart. The hard drive manufacturer's
long test "passed" on all four drives. The fact that I can mount each
of the partitions that made up /dev/md0, and that the md5 of all important
files on the system matches on both partitions (and just the fact that I can
read the data off the individual partitions), further shows that it is
not a hardware issue. I still have the drives; if I am wrong I have
no problem admitting it.

Well, just for starters, a couple of high-level (as in far removed from the nitty-gritty) hints while I go look up your original post that Carlos referred to.

One thing I've seen, which I don't think is your problem but shows the kind of thing that happens: a disk will drop out of the array and reappear instantly under a higher drive letter than the system really has. Say there are sda, sdb, sdc, sdd; then sdb disappears and suddenly an sde appears. It's possible via mdadm to add sde (or sde3, etc.) to the array and tell it to start rebuilding. At the next reboot the 4 drives will be sd[a-d] again, because the problem was the kernel driver momentarily losing the connection to the drive, or thinking it did. There are sata driver options that can be set via udev to affect what happens when a drive goes away, which might stop that renaming.
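That renamed-device recovery can be sketched as follows; the sde name is the hypothetical new letter from the example above, not a real log.

```shell
# The dropped member is gone from the array's point of view...
mdadm --detail /dev/md0
cat /proc/mdstat

# ...but the same physical disk has come back as /dev/sde.
# Just add its partition back and let the array rebuild.
mdadm /dev/md0 --add /dev/sde3

# Watch the resync progress.
watch cat /proc/mdstat
```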

"force" does force assemble (or force whatever action), but perhaps the context and scope is unclear. There is no magic force fix everything. There are force options for the various individual actions, and in each case the scope of the force option is limited to that type of action. If the raid formatting is ok, and merely the data is assumed to be inconsisten due to mismatched event counters, then force will force assemble.
If the raid metadata is scrambled, such as by a motherboard bios with its fakeraid option turned on placing some data of its own on the disk, then there is no way to force assemble or force run until you first make the raid formatting good again. This may mean deleting the array and/or just (re)creating it with all the same exact settings as the original, and --force, right over top of the disks. This won't touch the data itself, but will create all new raid formatting in between and around the data (or wherever it is that mdraid stores its metadata). The new raid formatting will be consistent, so the array can be assembled and run. It may or may not be mountable at that point; you just have a disk at that point, and maybe it has a good filesystem or maybe a scrambled one. In other words, you can only fix one thing at a time, and all lower things must be fixed, or made to appear fixed or treated as fixed, before the highest-level operation, run, can be done. That top level may or may not need --force itself too. Force run just forces it to run in situations where it is physically possible to run but it defaults not to for safety, such as a disk missing from an array.
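The (re)create-over-top step can be sketched like this. It is a destructive superblock rewrite: the level, device count, device order, and metadata version (and chunk size, where applicable) must match the original array exactly, or the data will not line up. All values below are placeholders.

```shell
# Record the original geometry from a surviving member first.
mdadm --examine /dev/sda3

# Recreate with identical settings over the same partitions.
# --assume-clean skips the initial resync, leaving the existing
# data untouched; --force allows creating over devices that
# already carry a raid superblock.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 --assume-clean --force \
      /dev/sda3 /dev/sdb3

# Then check (read-only) whether the filesystem inside survived.
fsck -n /dev/md0
```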

And I'm not even slightly an mdadm guru. I simply spent a good solid weekend, and then several smaller incidents, experimenting. I would say it's still a black art to me. But even at this level I have already performed actions you claim are impossible, and have seen symptoms like you describe above, except I looked at the problem longer than 13 seconds and discovered the problem was not as it seemed, and that it was perfectly solvable in every case so far. That includes those 10 boxes I was talking about. The disks kept failing randomly, but it was always possible to rebuild and rejoin them. It sometimes took some poking and insight. I'm not saying it was always obvious what to do or why. Just that it always turned out to be do-able, even when it looked impossible based on the first and most obvious commands.

So far my assertion stands. You should not expect mdraid to work for you, but that has no bearing on other people or on mdraid itself.
You are merely saying that because you don't know how to fly helicopters, helicopters are garbage.

Prove me wrong. Because no one has been able to provide the proper
commands to rebuild an array. There is no documentation on how to do
it, the man page is vague, and the commands don't work correctly.


The commands to rebuild an array depend on what's wrong with the array and on what you may or may not know about the array that the software cannot know.
I have had disks with physically bad sectors, and more than one disk bad in a raid5 array, and yet lost nothing, because I knew something the software couldn't know. Or I guess it could know, come to think of it, but anyway: I knew that although the disks were technically inconsistent, the data I cared about was actually ok, and I knew that one of the disks was a dd_rescue clone of a physically bad disk, so the new disk wasn't physically bad but did have some chunks missing. So I knew it was ok to create an array from scratch, using the exact same settings, right over top of the disks with data on them. _raid5_ data. I didn't know what particular commands were ok before I tried it, of course, but I knew no data was out in the scrambled part of the disk.

The man page IS vague. This is why I keep saying that mdadm is not mastered just by reading man mdadm. It may never be practical to write fully comprehensive documentation of mdraid in man mdadm either. It may be worth a small book.

mdraid isn't a great idea until after you've experimented some and figured out what the different buttons do by pressing them yourself, on a system where it was ok to do things that might erase everything.
Some actions sound like the absolute last thing you'd want to do from reading the man page, and yet, as long as you know what's going on, it's not only ok, it's the only way to fix the problem. Like deleting and recreating a raid0 or raid5 array.
And it's not wise to rely on something that you don't know how to manipulate.
That's basically true of all unix since day one and still today, not just Linux's mdraid.
Did you _really_ know that fsck wasn't going to erase everything the first time you ran it?
I did not know that "cp" wasn't going to do something bad the first time I ran it. (and of course, in fact cp can be about as dangerous as anything else) <ridiculous wandering aside>Maybe that's an interesting aspect of the difference between learning something on your own vs having a teacher guide you. If I were in a class and the teacher said "create a new file named too.txt that's a copy of me.txt like this: cp me.txt too.txt" Sure, in that case I would have not worried about anything even the very first time I ran cp. I probably would not have been -thinking- about anything let alone worrying. Hm...</ridiculous wandering aside>
One of my favorite people's favorite phrases is "learn by destroying".
That really is the best way. When you do something your very self and it blows up, and then when you do something else yourself and it is harmless, that is the surest knowledge in your head. You have not a trace of stress just sticking your hand right in the middle of the big scary gear works that everyone else doesn't dare touch, because you already did this at home lots of times and it's not a mystery. (or at least the handle you are grabbing isn't even if the machine itself still is. that's ok.)

Probably the easiest way to test things out is with a few usb thumb drives. You could use ramdisks, but then the kernel may protect you from performing exactly the experiments you need to do. But it can't stop you from yanking out a thumb drive or plugging it back in.
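If no spare thumb drives are handy, a throwaway array over loop devices gives most of the same practice. This needs root; the paths, sizes, and device names are arbitrary, and for the yank-the-drive experiments you still want real removable media, as noted above.

```shell
# Two 100 MB sparse files stand in for disks.
dd if=/dev/zero of=/tmp/d0.img bs=1M count=100
dd if=/dev/zero of=/tmp/d1.img bs=1M count=100
losetup /dev/loop0 /tmp/d0.img
losetup /dev/loop1 /tmp/d1.img

# Build a RAID-1 you can freely break.
mdadm --create /dev/md9 --level=1 --raid-devices=2 \
      /dev/loop0 /dev/loop1

# Practice the failure drill: fail, remove, re-add, resync.
mdadm /dev/md9 --fail /dev/loop1
mdadm /dev/md9 --remove /dev/loop1
mdadm /dev/md9 --add /dev/loop1
cat /proc/mdstat

# Tear everything down when done.
mdadm --stop /dev/md9
losetup -d /dev/loop0 /dev/loop1
```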

In your case, plug the original disks in and boot up a suse installer or a knoppix or ubuntu or suse live cd and run lsmod |less, dmesg |less, cat /proc/mdstat, mdadm -QE /dev/sd[a-z]3 |less, and mdadm -QD /dev/md[0-9] |less
Just to see what's there for starters.

--
Brian K. White brian@xxxxxxxxx http://www.myspace.com/KEYofR
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro BBx Linux SCO FreeBSD #callahans Satriani Filk!

--
To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
For additional commands, e-mail: opensuse+help@xxxxxxxxxxxx
