On 07/13/2020 04:21 PM, Lew Wolfgang wrote:
On 07/13/2020 12:52 PM, Lew Wolfgang wrote:
On 07/13/2020 12:32 PM, Dave Howorth wrote:
On Mon, 13 Jul 2020 08:15:03 -0700, Lew Wolfgang wrote:
Dave Howorth wrote:
Assuming /dev/sdc1 is part of the RAID, why are you trying to mount just it?

On 07/13/2020 03:58 AM, Per Jessen wrote:
I read /dev/sdc and /dev/sdd to be volumes "backed" by hardware RAID, individual drives not visible.

Yes, the RAID controller assembles, in this case, 36 14-TB SAS disks into two volumes: /dev/sdc and /dev/sdd. The volumes are each GPT-labeled, with partitions created as /dev/sdc1 and /dev/sdd1. mkfs.xfs is then used to create the two filesystems. mkfs.ext4 worked okay, which leads me to think that mkfs.xfs or something in the XFS libraries is broken.
The system is remote (I'm teleworking), but I'll go in today and try a couple of things, like booting a 15.1 rescue ISO to see if it can mount the partitions. If it can't, I'll try the 15.1 mkfs.xfs and see what happens.
Thanks for the explanation.
I'm still a bit confused though. Your bug report is about installation, but what you're discussing appears to be a problem creating an XFS filesystem? You haven't shown any details of that creation, though: neither its output nor the arguments supplied to it.
I kept it short and sweet for the bug report, a failed installation is something you can hang your hat on. I instructed the installation process to create the two large filesystems. The partitioner complained that the "structure needs cleaning". I hit the "ignore" button, but the first boot failed with the mount problem.
After the install failed to boot, I commented out the two fstab entries and booted without mounting the RAID partitions. I then tried to build new filesystems using YaST's partitioner, gparted, and mkfs.xfs. In all cases the failure appeared when trying to mount the just-created filesystems.
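For what it's worth, since the complaint is specifically about a bad stripe width, one more thing worth trying is handing mkfs.xfs the stripe geometry explicitly rather than letting it take whatever the controller reports. A minimal sketch against a scratch image file so nothing real is touched (the su/sw values are illustrative only; on the real system the target would be /dev/sdc1 and the numbers would have to match the actual RAID layout):

```shell
# Create a scratch backing file instead of touching a real device.
truncate -s 512M /tmp/xfs-scratch.img

# su = stripe unit written to each data disk, sw = number of data disks.
# Hypothetical values for illustration -- match them to the controller.
mkfs.xfs -f -d su=256k,sw=4 /tmp/xfs-scratch.img
```

If a filesystem made with explicit geometry still fails to mount under 15.2, that would point at the 15.2 verifier rather than at whatever geometry the controller is reporting.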
Mount returns:
mount -t xfs /dev/sdc1 /mnt
mount: /export/data: mount(2) system call failed: structure needs cleaning.
Then, xfs_repair returns:
Phase 1 - Find and Verify Superblock...
Bad Primary Superblock - bad stripe width in Superblock!
Attempting to find secondary superblock...
........ (I didn't wait around, too many dots)
This morning, Arvin Schnell (Bugzilla) noticed this in /var/log/messages:
[ 1361.758237] XFS (sdc1): SB stripe unit sanity check failed
[ 1361.758315] XFS (sdc1): Metadata corruption detected at xfs_sb_read_verify+0xfe/0x170 [xfs], xfs_sb block 0xffffffffffffffff
[ 1361.758315] XFS (sdc1): Unmount and run xfs_repair
[ 1361.758316] XFS (sdc1): First 128 bytes of corrupted metadata buffer:
Same entries for sdd1.
Note the 0xffffffffffffffff, an overflow somewhere?
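It may not be an overflow as such: interpreted as a signed 64-bit integer, the all-ones value 0xffffffffffffffff is -1, which is the sort of sentinel code often stores for "no valid block address". A quick check in the shell (bash arithmetic is 64-bit two's complement):

```shell
# All-ones 64-bit hex read as a signed integer comes out as -1,
# i.e. a plausible "no such block" sentinel rather than a real address.
addr=$(( 0xffffffffffffffff ))
echo "$addr"   # prints -1
```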
Again, ext4 built without issue.
I'm leaving right now to try some additional things. Further news when I return.
I'm back.
The xfs_repair that I started yesterday finished, saying:
"Sorry, could not find valid secondary superblock"
Today I booted the 15.1 rescue system and determined that mount works! It reported:
4096 byte physical blocks
574218043392 blocks for sda1 (drive lettering changed)
273437163520 blocks for sdb1
Back running 15.2, fdisk reports the same block counts as 15.1.
Note that 15.1 was able to mount the XFS partitions created by 15.2's mkfs.xfs.

This implies to me that the problem is probably in the 15.2 XFS kernel module?

Regards,
Lew