On Sat, 29 Oct 2016 15:11:31 -0700 John Andersen <jsamyth@gmail.com> wrote:
On 10/29/2016 02:44 PM, Dave Howorth wrote:
What Linda wrote seemed quite likely. See below...
No, it seemed wrong on several counts:
Ah, I obviously missed all the facts below that you must have posted in your first message, but since you already completely understand, I guess you don't need people trying to be helpful.
1) The system was never mounted with inode64. It was set up about six months after the 13.2 release date and has been rock solid ever since.
2) This was a known deficiency in xfs_repair 3.2.1, long since fixed upstream but never for openSUSE 13.2.
3) there were exactly two inodes that were trashed somehow, not hundreds.
4) Restoring from a backup would not have fixed anything: the two corrupted inodes would still be corrupted, could not be rewritten or removed, may not have been unlinkable, and those files were browser cache detritus anyway. They never caused errors until a filesystem backup tried to read them.
5) Last known access to these files was in June (judging by the dates of surrounding files and by their appearance in prior backups but not in subsequent ones).
6) There appears NOT to have been any issue relating to LUKS encryption here, other than needing to run xfs_repair against the /dev/mapper device, which is the only entry in fstab for this partition. (The underlying hardware partition, /dev/sda4, is not listed in fstab, and I was well aware not to address that directly anyway.)
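The layout is the usual LUKS-on-a-partition arrangement; roughly like the following (entries paraphrased, mount point and options illustrative rather than copied from the real files):

    # /etc/crypttab -- maps the raw partition to the dm-crypt device
    cr_yaddayadda   /dev/sda4   none   luks

    # /etc/fstab -- only the mapper device is listed; /dev/sda4 itself never appears
    /dev/mapper/cr_yaddayadda   /home   xfs   defaults   0 0

So everything, mounting and repairing alike, goes through /dev/mapper/cr_yaddayadda.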
I tried to reply to her post, but did so on my phone while riding transit and lost the entire thing.
On Fri, 28 Oct 2016 23:20:50 -0700 Linda Walsh <suse@tlinx.org> wrote:
John Andersen wrote:
If I list the entire directory I get this:
poulsbo:/home # l ./jsa/.cache/google-chrome/Default/old_Cache_000/
ls: cannot access ./jsa/.cache/google-chrome/Default/old_Cache_000/f_0259ff: Structure needs cleaning
total 70772
drwx------ 2 jsa users    94208 May 31 12:38 ./
drwx------ 5 jsa users       59 Jun  2 14:37 ../
-rw------- 1 jsa users 10231808 May 31 12:31 data_4
...snip
-rw------- 1 jsa users    32938 Feb 14  2016 f_020630
-????????? ? ?   ?             ?            ? f_0259ff
-rw------- 1 jsa users    33622 Mar 17  2016 f_025fef
Over on the XFS site, the FAQ mumbles something about question marks in a directory listing suggesting you might need inode64, but it's only a 500 GB drive and that seems unwise at this point.
you are missing their point. It **looks** like that partition was already used with the "inode64" option. Once you use that option, you can't go back because files are created in the new format. If you later mount the disk without that option, you see output similar to what you are showing above.
It's not that you need to move to inode64; it's that you need to try it, because it looks like the partition has already been used with that option.
I think it might be worth trying that option. If that doesn't work, then as Linda says, restore from backup.
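Something like the following should test it, assuming the filesystem normally lives at /home on the cr_yaddayadda mapper device (names taken from elsewhere in this thread; adjust to match your fstab):

    # unmount, then mount the mapper device by hand with inode64 and see
    # whether the "?????????" entries come back as readable files
    umount /home
    mount -o inode64 /dev/mapper/cr_yaddayadda /home

If the directory then lists cleanly, the missing mount option was the whole story and no repair is needed.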
These also dumped the following into my log:
----
If you have the wrong mount options, and are telling it to ignore the new inodes, there will be much confusion and possibly kernel panics.
So I did as directed, unmounted it and ran xfs_repair on the /dev/mapper/cr_yaddayadda device that represents the encrypted partition /dev/sda4.
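For anyone following along, the general recipe looks like this (the -n dry run first is optional but prudent):

    # unmount the filesystem first
    umount /dev/mapper/cr_yaddayadda

    # no-modify pass: report problems without changing anything
    xfs_repair -n /dev/mapper/cr_yaddayadda

    # then the real repair, against the same mapper device
    xfs_repair /dev/mapper/cr_yaddayadda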
AND it is encrypted?
(It is said to be pointless and dangerous to run xfs_repair on the actual underlying device. Is this true? It's still XFS underneath, is it not?)
----
No. Under the encryption layer, it is garbage. If you ran repair on the underlying partition first, it likely trashed the partition.
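The stacking is easy to see with lsblk: the XFS filesystem exists only inside the cr_* mapper device, so that is the only node repair tools should ever touch. Illustratively (sizes and mount point will differ):

    lsblk /dev/sda4
    # NAME              MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
    # sda4                8:4    0  ... 0  part
    # └─cr_yaddayadda   254:0    0  ... 0  crypt /home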
After multiple such runs, xfs_repair finds no errors, yet the problem is not repaired.
If the disk is corrupt due to bad mount options and/or trying to repair a raw-encrypted partition, it's likely unrepairable.
I'd restore from a backup.
smartctl shows no errors and no sectors having been re-mapped. The drive appears physically fine.
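(That was just the usual smartctl once-over of the raw disk, along the lines of:)

    # full SMART report; the attributes of interest here are
    # Reallocated_Sector_Ct and Current_Pending_Sector
    smartctl -a /dev/sda

    # a long self-test can be kicked off as an extra check
    smartctl -t long /dev/sda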
So how do I proceed from here?
---
restore from backup.