Well it turns out that md0 is in state: dirty, no-errors for about 19 days!! Any good ideas anyone?? Is there something that I can do to change the "dirty" state??

Chris
----- Original Message -----
From: "Anders Johansson" <andjoh@rydsbo.net>
To: <suse-linux-e@suse.com>
Sent: Friday, October 08, 2004 1:48 PM
Subject: Re: [SLE] Raid-1 Question
On Friday, 8 October 2004 12.18, Chris Roubekas wrote:
    Update Time : Sat Sep 18 19:04:53 2004
          State : dirty, no-errors
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1      22        1        1      active sync   /dev/hdc1
           UUID : 03f9846c:72f50436:af213782:c8e28b96
Can someone give me some insight on why the update time is about 26 days ago??
I believe that's the time your raid superblock was last updated.
Does that mean that my md0 is not being updated instantaneously?
How long did you wait after the copy? Unless you have the file system mounted with the "sync" option, writes will be buffered for performance reasons. Try running "sync" in a shell and then looking in the directories again.
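Something along these lines should do it (just a sketch; the mount point /mnt/raid is only an example, adjust it to wherever the array is actually mounted):

  sync                                           # flush buffered writes out to the disks
  ls -l /mnt/raid                                # the copied files should be visible now
  mdadm --detail /dev/md0 | grep "Update Time"   # superblock update time, for comparison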
Chris Roubekas wrote:
Well it turns out that md0 is in state: dirty, no-errors for about 19 days!!
Any good ideas anyone?? Is there something that I can do to change the "dirty" state??
Other than a reboot, I don't know. It would need to be fscked to be marked as clean AFAIK, but with Linux uptimes, I suspect this is just saying it will be checked at the next reboot. IIUC, there is both a max mount count and a max number of days for a partition to stay 'clean'. If you reboot often, or otherwise mount and unmount often, your drives will be fscked often. Likewise, if you seldom reboot, your drive may need to be checked after a certain amount of uptime (due to power glitches, etc.), which IIUC is accomplished by marking the partition as dirty so it is sure to be checked at the next opportunity, such as a reboot.

I suppose you could remount it ro to fsck, but since there are so many programs and processes running which need to write to the drives, it seems safer to me in the long run to just reboot. If I'm wrong, I would also like to know a better way. HTH.

--
Joe Morris
New Tribes Mission
Email Address: Joe_Morris@ntm.org
Registered Linux user 231871
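If you want to see what the filesystem itself thinks, something like this shows the relevant counters (this assumes an ext2/ext3 filesystem sitting directly on /dev/md0; adjust the device if yours differs, and note that the /forcefsck trick is honoured by many distributions' boot scripts but you should check your own):

  tune2fs -l /dev/md0 | grep -i -E 'mount count|check'   # current/maximum mount count and check interval
  touch /forcefsck                                        # ask the boot scripts to run a full fsck at the next reboot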