On Sun, 2010-08-15 at 15:33 -0700, Linda Walsh wrote:
Greg Freemyer wrote:
On Sun, Aug 15, 2010 at 8:30 AM, Matthias G. Eckermann <mge@novell.com> wrote:
On 2010-08-14 T 20:41 -0400 Greg Freemyer wrote:
On Sat, Aug 14, 2010 at 7:45 PM, Linda Walsh wrote:
Forwarding this from the linux-xfs list, in the hope that the decision-makers will consider XFS as the default SUSE OS install choice. In the past (2.4.x kernel) XFS had significant data-loss issues in the presence of unexpected shutdowns.

I am afraid this sounds a bit vague: please check whether, in those cases, the storage backends had been configured correctly. Every journaling filesystem will suffer data corruption if there are any caches (such as the disk write cache) in between the write and the disk.

It was very well known and accepted in the 2002-2005 timeframe that XFS on Linux would often replace the contents of files with nulls. And I mean exclusively nulls.
That is because that area of the file was RECORDED in the journal as having been written to. The plug was pulled before the data could be written. It is a violation of security policy and practice NOT to zero the data in such a case. XFS was also chartered to handle a basic security requirement: if a process has written to an object, the old data is guaranteed to be zeroed or reinitialized.
+1 This was a bogus argument against XFS. If a file's contents are corrupted, then it is a corrupt file; any other behavior is equivalent to wishing-and-a-hoping that the file is OK.
Thus many people who had the poor luck to use XFS for their home directory had their KDE config files turned into null-filled files by a power outage.
That's because they overwrote them, then pulled the plug before the new data could be written. In my experience, it's very rare.
Ditto. We tested and tested and never managed to induce the described behavior [on servers].
It may be inconvenient, but an unplanned shutdown with many seconds of unflushed data pending is a bad situation no matter what. XFS didn't cause those files to be zeroed. Their programs wrote to those areas, and the fact that those areas no longer contained useful data was recorded in the journal. It's not until the data is flushed that the data in those files would be valid again. So really -- you may not like it, but zeroing it is the safest thing to do (besides being a security requirement).
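The failure mode described in this thread -- a config file rewritten in place, then a crash before the new data reached disk -- is something applications can defend against with the classic write-to-temp, fsync, rename pattern, so a crash leaves either the old file or the new one, never a zeroed window. A minimal sketch (the function name and error handling are my own, not anything from this thread or from KDE's actual code):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` such that a crash at any point leaves
    either the complete old contents or the complete new contents."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the new bytes to stable storage
        os.replace(tmp, path)      # atomic rename on POSIX filesystems
        # fsync the directory so the rename itself is durable
        dfd = os.open(dirname, os.O_DIRECTORY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)
    except BaseException:
        try:
            os.unlink(tmp)         # clean up the temp file on failure
        except FileNotFoundError:
            pass
        raise
```

The key point matches the argument above: the filesystem journal only guarantees metadata consistency; it is fsync (and rename atomicity) that the application must use if it wants its data, not just the filesystem structures, to survive a pulled plug.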
Actually, as an admin I always wanted a feature that caused the filesystem to just *erase* files known to be in an inconsistent state. One can always restore the file from backup - it is just nice to know up-front that something is amiss, rather than whenever-the-file-gets-pinged.

-- 
Adam Tauno Williams <awilliam@whitemice.org> LPIC-1, Novell CLA
<http://www.whitemiceconsulting.com>
OpenGroupware, Cyrus IMAPd, Postfix, OpenLDAP, Samba