[opensuse] RE: Tips on Avoiding Any Hard Drive RAID Not Mounting on Startup and Failing to Multiple Error Levels
Like many, I too have had my share of RAID volumes being created and mounting perfectly after creation, only to find that the volume will not mount at boot, failing with any number of error levels. Often the user is dropped into a shell repair mode, and running repairs like fsck could be the worst thing you can do. Often the problem is the superblock creation time... here's the kicker...

If the RAID was created whilst the system was up and running, the superblock's date/time is taken from your prevailing system clock. Most of us set up an NTP network service to auto-sync with time servers all over the world. IF, at boot, your PC's CMOS date/time is behind the date/time of the PC's session clock, you will be flooded with huge numbers of reasons why the volume won't mount...

The real reason is that the superblock was written with the date/time of your PC session clock, and now at boot your CMOS clock is running behind it. You will be flooded with various error levels as to why the volume won't mount; some of the reasons are really fanciful, and all are a consequence of an "invalid" superblock. The error levels that appear at boot could cite just about any reason.

Before you start entertaining any repair mode, or any maintenance mode where / is mounted rw, just make sure your CMOS clock is accurate. This might just fix your never-ending issues with creating a RAID volume that won't mount at boot.

Hope this helps a few guys/girls -Scott - .AU

-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
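The check described above can be sketched in a few lines of shell. This is a minimal GNU/Linux sketch with untested assumptions: reading the CMOS/RTC with hwclock needs root (without it the script falls back to zero skew), the sed expression targets older hwclock output that appends a fractional-seconds suffix, and the 60-second threshold is an arbitrary example.

```shell
#!/bin/sh
# Sketch: compare the NTP-synced session clock against the CMOS/RTC clock
# before blaming the RAID volume for refusing to mount.

sys_now=$(date -u +%s)                     # session clock, epoch seconds

# hwclock prints a human-readable timestamp; older builds append a trailing
# " -0.123456 seconds", which we strip before re-parsing with GNU date -d.
hw_raw=$(hwclock --show --utc 2>/dev/null | sed 's/ *[-+][0-9.]* seconds$//')
hw_now=""
[ -n "$hw_raw" ] && hw_now=$(date -u -d "$hw_raw" +%s 2>/dev/null)
hw_now=${hw_now:-$sys_now}                 # fall back when the RTC is unreadable

skew=$((sys_now - hw_now))
if [ "$skew" -gt 60 ]; then
    echo "clock skew ${skew}s: CMOS lags the session clock; superblock timestamps may look 'in the future' at boot"
else
    echo "clock skew ${skew}s: OK"
fi
```

If the skew is large, syncing with NTP and then writing the system time back to the CMOS clock with `hwclock --systohc` is the usual fix.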
IDS000 wrote:
Like many, I too have had my share of RAID volumes being created and mounting perfectly after creation, only to find that the volume will not mount at boot, failing with any number of error levels.
Often the user is dropped into a shell repair mode, and running repairs like fsck could be the worst thing you can do. Often the problem is the superblock creation time... here's the kicker...
If the RAID was created whilst the system was up and running, the superblock's date/time is taken from your prevailing system clock. Most of us set up an NTP network service to auto-sync with time servers all over the world.
IF, at boot, your PC's CMOS date/time is behind the date/time of the PC's session clock, you will be flooded with huge numbers of reasons why the volume won't mount...
The real reason is that the superblock was written with the date/time of your PC session clock, and now at boot your CMOS clock is running behind it. You will be flooded with various error levels as to why the volume won't mount; some of the reasons are really fanciful, and all are a consequence of an "invalid" superblock.
The error levels that appear at boot could cite just about any reason why the volume won't mount. Before you start entertaining any repair mode, or any maintenance mode where / is mounted rw, just make sure your CMOS clock is accurate.
This might just fix your never-ending issues with creating a RAID volume that won't mount at boot.
Hope this helps a few guys/girls -Scott - .AU
Why would this have an effect on RAID-ed filesystems but not on non-RAID filesystems? They both have superblocks, and as far as I know, there is no additional "RAID superblock", just the same old superblock in the filesystem which we have been living with since 1970s versions of Unix.
Dirk Gently wrote:
Why would this have an effect on RAID-ed filesystems but not on non-RAID filesystems? They both have superblocks, and as far as I know, there is no additional "RAID superblock", just the same old superblock in the filesystem which we have been living with since 1970s versions of Unix.
I'm not sure I understand you. mdadm certainly has superblocks: https://raid.wiki.kernel.org/articles/r/a/i/RAID_superblock_formats_fd05.htm...
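The distinction at issue here can be seen directly on a running system: a RAID member carries two independent superblocks, the md metadata that mdadm writes on each member device, and the filesystem superblock inside the assembled array, each with its own creation timestamp. A sketch, where /dev/sdb1 and /dev/md0 are example device names and both tools may be absent or unreadable without root:

```shell
#!/bin/sh
# Sketch: show the creation time recorded in the md (RAID) metadata versus
# the one in the ext filesystem superblock. These live at different offsets
# and are maintained by different code.

md_line=$(mdadm --examine /dev/sdb1 2>/dev/null | grep -i 'creation time' \
          || echo "(no md metadata readable here)")
fs_line=$(dumpe2fs -h /dev/md0 2>/dev/null | grep -i 'created' \
          || echo "(no ext superblock readable here)")

echo "md superblock:         $md_line"
echo "filesystem superblock: $fs_line"
```

The md metadata's on-disk location depends on its version (0.90, 1.0, 1.1, 1.2), but in every case it is entirely separate from the filesystem's own superblock.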
IDS000 wrote:
Like many, I too have had my share of RAID volumes being created and mounting perfectly after creation only to findthat the Volume will not mount at boot and failing to any number of error levels.
--- Your information may be correct, dunno, but I'm a wimp on these things. I don't want to boot to something my OS doesn't see as a single volume on boot. Too many things can go wrong in the OS... If it's in HW, or a BIOS, then it's not as likely -- (RAIDs can always go bad, that's life), but you won't have problems like you are mentioning if you boot to a single volume and then activate a RAID for the rest of your system. One unknown-cause RAID5 death (no disks were bad) was enough to put me off software RAID for some time... things are better now (I think), but given the problems with trying to force HDMI monitors into 1024x768 mode, which some modern OSes seem to do, I may be guilty of wishful thinking (again)...
Often the user is presented with a shell repair mode and running repairs like fsck could be the worst thing you can do. Often the problem is the superblock creation time...here's the kicker...
---- Question is -- why are you running fsck? If you are running with a RAID, it's probable that you have a larger file system -- XFS was designed for such -- and it doesn't need fsck. Then you've narrowed the problem down to mdadm or such. But to continue -- you still need to have... well, booting from a rescue CD works for me! ;-) (HEY WHOEVER -- DON'T BREAK THAT!!!!)
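For anyone following this advice: XFS genuinely has no fsck pass (fsck.xfs exists only so boot scripts can call fsck uniformly; it does nothing and returns success), and an offline check is instead done with xfs_repair in no-modify mode. A sketch, assuming the xfsprogs package and an example device name /dev/md0:

```shell
#!/bin/sh
# Sketch: check an unmounted XFS volume without modifying it.
# -n: no-modify (dry run) -- scan and report problems, write nothing.

if command -v xfs_repair >/dev/null 2>&1; then
    check_out=$(xfs_repair -n /dev/md0 2>&1 || true)
else
    check_out="xfs_repair not installed here (part of xfsprogs)"
fi
echo "$check_out"
```

Only after a clean `-n` run, or a deliberate decision, would you drop the flag and let xfs_repair actually write repairs.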
participants (5)
- Brian K. White
- Dave Howorth
- Dirk Gently
- IDS000
- Linda Walsh