
I have 3 computers using RAID1. When booted to Leap, all devices equate to:

	/dev/mdX

names, while booted to TW or Slowroll:

	/dev/md12X

All 3 use the same format in /etc/mdadm.conf, e.g.:

HOMEHOST <ignore>
DEVICE containers partitions
ARRAY /dev/mdX metadata=1.0 name=hostname:filesystemlabel UUID=…

Each installation on each machine uses one identical file for /etc/mdadm.conf.

Leap example:

# mdadm -D /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Tue Aug 28 00:35:21 2018
        Raid Level : raid1
        Array Size : 8834944 (8.43 GiB 9.05 GB)
     Used Dev Size : 8834944 (8.43 GiB 9.05 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Jan 18 04:01:09 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : hostname:filesystemlabel
              UUID : …
            Events : 1789

    Number   Major   Minor   RaidDevice State
       3       8       21        0      active sync   /dev/sdb5
       2       8        5        1      active sync   /dev/sda5
#

How can I get TW to do as Leap does, and use the names in mdadm.conf? Are
--homehost= and/or --prefer= needed to be used somehow with mdadm's Manage
mode to change something recorded on each device as shown by mdadm -D?
--
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

     Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata
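For context on the md12X numbers: when mdadm cannot match an array to a local
name, via HOMEHOST or a trusted ARRAY line, it treats the array as foreign and
allocates free device numbers counting down from 127, which is where
/dev/md127, /dev/md126, … come from. A rough way to see which names the member
superblocks actually carry, assuming root and with UUIDs elided as above:

# mdadm --examine --scan
ARRAY /dev/md/filesystemlabel metadata=1.0 UUID=… name=hostname:filesystemlabel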

On 19. 01. 25, 21:00, Felix Miata wrote:
I have 3 computers using RAID1. When booted to Leap, all devices equate to:
/dev/mdX
names, while booted to TW or Slowroll:
/dev/md12X
I assume the kernel dubs them differently. The question is why you would need
to care at all. There are no naming guarantees anyway. They are supposed to be
assembled per device UUIDs. So are the FSs/LVMs on top of them, right?
--
js
suse labs
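As a sketch of what assembling and mounting per UUIDs means for the layers on
top, fstab can reference the filesystem LABEL or UUID instead of the md device
name; the label 3hom is borrowed from later in this thread, purely as an
example:

LABEL=3hom  /home  ext4  noatime  0 2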

On Mon, Jan 20, 2025 at 10:53 AM Jiri Slaby <jslaby@suse.cz> wrote:
On 19. 01. 25, 21:00, Felix Miata wrote:
I have 3 computers using RAID1. When booted to Leap, all devices equate to:
/dev/mdX
names, while booted to TW or Slowroll:
/dev/md12X
I assume the kernel dubs them differently. The question is why you would need to care at all. There are no naming guarantees anyway.
Software devices are not enumerated; they are created, and their names are assigned by the applications that create them. According to the mdadm documentation, it is supposed to assign the names from mdadm.conf (if mdadm.conf has names). So the question is justified. dmesg output would be a good starting point.
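A rough way to compare the names mdadm actually assembled against the config,
with illustrative values taken from later in this thread:

# mdadm --detail --scan
ARRAY /dev/md121 metadata=1.0 name=srv10:md-home UUID=99e6…
# grep '^ARRAY' /etc/mdadm.conf
ARRAY /dev/md7 metadata=1.0 name=msi85:md-home UUID=99e6…

A UUID that matches while the name= differs would point at the stored name,
not the config line, as the mismatch.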
They are supposed to be assembled per device UUIDs. So are the FSs/LVMs on top of them, right?
--
js
suse labs

Andrei Borzenkov composed on 2025-01-20 11:13 (UTC+0300):
Jiri Slaby wrote:
Felix Miata wrote:
I have 3 computers using RAID1. When booted to Leap, all devices equate to:
/dev/mdX
names, while booted to TW or Slowroll:
/dev/md12X
I assume the kernel dubs them differently. The question is why you would need to care at all. There are no naming guarantees anyway.
Why I care shouldn't matter. I have reasons, not the least of which are df output and seeing lists in a logically sorted order. UUIDs are not made for humans to deal with; I use LABELs. The /dev/md12X devices show up in a random order, rather than as /dev/mdX in the order enumerated in /etc/mdadm.conf.
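If the aim is readable listings, findmnt can print LABELs next to the kernel
device names; the column choice here is one possibility among several, and the
output line is illustrative (built from the Leap /home shown below):

# findmnt -t ext4 -o SOURCE,LABEL,USE%,TARGET
SOURCE    LABEL USE% TARGET
/dev/md3  3hom   62% /home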
Software devices are not enumerated; they are created, and their names are assigned by the applications that create them. According to the mdadm documentation, it is supposed to assign the names from mdadm.conf (if mdadm.conf has names). So the question is justified.
dmesg output would be a good starting point.
All from each, or is the following enough?

15.6:
# dmesg | egrep 'raid|md' | grep -v systemd
[    0.097364] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[    7.975813] md/raid1:md4: active with 2 out of 2 mirrors
[    8.008985] md4: detected capacity change from 0 to 434175616
[    8.025452] md/raid1:md0: active with 2 out of 2 mirrors
[    8.040307] md/raid1:md3: active with 2 out of 2 mirrors
[    8.073254] md0: detected capacity change from 0 to 17669888
[    8.162935] md3: detected capacity change from 0 to 147455744
[    8.281787] md/raid1:md2: active with 2 out of 2 mirrors
[    8.309826] md/raid1:md5: active with 2 out of 2 mirrors
[    8.358641] md2: detected capacity change from 0 to 16383872
[    8.443362] md5: detected capacity change from 0 to 1310719616
[    8.496604] md/raid1:md1: active with 2 out of 2 mirrors
[    8.792265] EXT4-fs (md4): mounted filesystem 2d7d… r/w with ordered data mode. Quota mode: none.
[    8.955541] md1: detected capacity change from 0 to 8191872
[    9.926766] EXT4-fs (md3): mounted filesystem 71bc… r/w with ordered data mode. Quota mode: none.
[   10.404943] EXT4-fs (md2): mounted filesystem 7043… r/w with ordered data mode. Quota mode: none.
[   10.490882] EXT4-fs (md5): mounted filesystem 05f1… r/w with ordered data mode. Quota mode: none.
[   10.734198] EXT4-fs (md1): mounted filesystem c3f3… r/w with ordered data mode. Quota mode: none.
[378705.317095] md: data-check of RAID array md0
[378706.160997] md: delaying data-check of md1 until md0 has finished (they share one or more physical units)
[378706.211840] md: delaying data-check of md2 until md0 has finished (they share one or more physical units)
[378706.262224] md: delaying data-check of md4 until md0 has finished (they share one or more physical units)
[378706.305024] md: delaying data-check of md5 until md0 has finished (they share one or more physical units)
[378706.614193] md: delaying data-check of md3 until md0 has finished (they share one or more physical units)
[378769.258496] md: md0: data-check done.
[378769.280372] md: delaying data-check of md4 until md5 has finished (they share one or more physical units)
[378769.280402] md: delaying data-check of md5 until md1 has finished (they share one or more physical units)
[378769.280414] md: delaying data-check of md1 until md3 has finished (they share one or more physical units)
[378769.280437] md: delaying data-check of md3 until md1 has finished (they share one or more physical units)
[378769.280447] md: delaying data-check of md2 until md1 has finished (they share one or more physical units)
[378769.280466] md: delaying data-check of md1 until md2 has finished (they share one or more physical units)
[378769.280477] md: data-check of RAID array md2
[378829.688472] md: md2: data-check done.
[378829.806698] md: data-check of RAID array md1
[378859.492551] md: md1: data-check done.
[378859.505776] md: delaying data-check of md5 until md3 has finished (they share one or more physical units)
[378859.505792] md: data-check of RAID array md3
[379396.162381] md: md3: data-check done.
[379396.179947] md: data-check of RAID array md5
# dmesg | wc -l
1040
#

TW:
# dmesg | egrep 'raid|md' | grep -v systemd
[    1.444626] [    T1] simple-framebuffer simple-framebuffer.0: [drm] fb0: simpledrmdrmfb frame buffer device
[    3.466308] [  T713] md/raid1:md121: active with 2 out of 2 mirrors
[    3.466499] [  T629] md/raid1:md120: active with 2 out of 2 mirrors
[    3.467569] [  T723] md/raid1:md126: active with 2 out of 2 mirrors
[    3.528638] [  T713] md121: detected capacity change from 0 to 962559616
[    3.553376] [  T725] md/raid1:md122: active with 2 out of 2 mirrors
[    3.579895] [  T629] md120: detected capacity change from 0 to 19175168
[    3.611664] [  T723] md126: detected capacity change from 0 to 8191872
[    3.689979] [  T725] md122: detected capacity change from 0 to 36863744
[    3.691278] [  T725] md122:
[    3.697370] [  T726] md/raid1:md124: active with 2 out of 2 mirrors
[    3.795047] [  T726] md124: detected capacity change from 0 to 36863744
[    3.871317] [  T736] md/raid1:md125: active with 2 out of 2 mirrors
[    4.042822] [  T736] md125: detected capacity change from 0 to 4095872
[    4.047158] [  T765] md/raid1:md119: active with 2 out of 2 mirrors
[    4.174123] [  T765] md119: detected capacity change from 0 to 307199744
[    4.192508] [  T767] md/raid1:md118: active with 2 out of 2 mirrors
[    4.392061] [  T767] md118: detected capacity change from 0 to 481279616
[    4.396833] [  T730] md/raid1:md123: active with 2 out of 2 mirrors
[    4.609041] [  T730] md123: detected capacity change from 0 to 36863744
[    4.614353] [  T778] md/raid1:md127: active with 2 out of 2 mirrors
[    5.026676] [  T778] md127: detected capacity change from 0 to 36863744
[    6.627555] [  T834] EXT4-fs (md123): mounted filesystem ab93… ro with ordered data mode. Quota mode: none.
[   10.935280] [  T903] EXT4-fs (md123): re-mounted ab93… r/w. Quota mode: none.
[   15.103416] [  T945] EXT4-fs (md118): mounted filesystem c492… r/w with ordered data mode. Quota mode: none.
[   18.402738] [  T950] EXT4-fs (md126): mounted filesystem 36c2… r/w with ordered data mode. Quota mode: none.
[   18.404912] [  T948] EXT4-fs (md121): mounted filesystem beb5… r/w with ordered data mode. Quota mode: none.
[   22.757118] [ T1014] EXT4-fs (md119): mounted filesystem 2e3d… r/w with ordered data mode. Quota mode: none.
[   22.944684] [ T1018] EXT4-fs (md125): mounted filesystem d39e… r/w with ordered data mode. Quota mode: none.
# dmesg | wc -l
1029
#
They are supposed to be assembled per device UUIDs. So are the FSs/LVMs on top of them, right?
Sure, but in df output it's device names shown, not UUIDs or LABELs.

15.6:
# df | grep md | sort
/dev/md1         4026618   3023832    957731  76% /…
/dev/md2         8088832   1829577   6173240  23% /…
/dev/md3        73090862  44422256  27927232  62% /…
/dev/md4       215282147 172469422  42808629  81% /…
/dev/md5       649994936 615992452  33986100  95% /…
#

TW:
/dev/md118     236672420 178422744  55826896  77% /…
/dev/md119     150997668  43405448 106039840  30% /…
/dev/md121     473525304 461983356  11525564  98% /…
/dev/md123      17966368   8603616   8424776  51% /…
/dev/md125       1980016   1793072    150084  93% /…
/dev/md126       3958112   3115948    784824  80% /…
#

On both these PCs, each md device's partitions are laid out in order, smallest
at front, largest at end. TW scrambles their assigned names; Leap's are as
configured in mdadm.conf.
--
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

     Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata

19.01.2025 23:00, Felix Miata wrote:
I have 3 computers using RAID1. When booted to Leap, all devices equate to:
/dev/mdX
names, while booted to TW or Slowroll:
/dev/md12X
All 3 use the same format in /etc/mdadm.conf, e.g.:
HOMEHOST <ignore>
DEVICE containers partitions
ARRAY /dev/mdX metadata=1.0 name=hostname:filesystemlabel UUID=…
Either this mdadm.conf is not available when the array is assembled, or this line does not match the actual array properties.
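One way to test the "not available" half, assuming a dracut-built initrd (the
image path may differ per install):

# lsinitrd -f etc/mdadm.conf /boot/initrd

If that prints nothing, early assembly in the initrd has no config to take
names from and presumably falls back to what the superblocks say.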
Each installation on each machine uses one identical file for /etc/mdadm.conf.
Leap example:

# mdadm -D /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Tue Aug 28 00:35:21 2018
        Raid Level : raid1
        Array Size : 8834944 (8.43 GiB 9.05 GB)
     Used Dev Size : 8834944 (8.43 GiB 9.05 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Jan 18 04:01:09 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : hostname:filesystemlabel
              UUID : …
            Events : 1789

    Number   Major   Minor   RaidDevice State
       3       8       21        0      active sync   /dev/sdb5
       2       8        5        1      active sync   /dev/sda5
#

How can I get TW to do as Leap does, and use the names in mdadm.conf? Are
--homehost= and/or --prefer= needed to be used somehow with mdadm's Manage
mode to change something recorded on each device as shown by mdadm -D?

Andrei Borzenkov composed on 2025-01-20 21:42 (UTC+0300):
Felix Miata wrote:
I have 3 computers using RAID1. When booted to Leap, all devices equate to:
/dev/mdX
names, while booted to TW or Slowroll:
/dev/md12X
All 3 use the same format in /etc/mdadm.conf, e.g.:
HOMEHOST <ignore>
DEVICE containers partitions
ARRAY /dev/mdX metadata=1.0 name=hostname:filesystemlabel UUID=…
Either this mdadm.conf is not available when the array is assembled, or this line does not match the actual array properties.
On both PCs, the RAIDs were built before acquisition of the motherboards that currently use them, so the hostnames then differed. On this 15.6, the RAID devices are only for data, so unrelated to booting. (5 openSUSE installations on this NVMe use the exact same RAID devices.)

# grep RETT /etc/os-release
PRETTY_NAME="openSUSE Leap 15.6"
# hostname
00srv
# mount | egrep 'boot|home'
/dev/nvme0n1p3 on /disks/boot type ext2 (rw,relatime,lazytime)
/dev/md3 on /home type ext4 (rw,relatime,lazytime)
# df / /home
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/nvme0n1p8  17966480  4528928  12499572  27% /
/dev/md3        73090862 44443264  27906224  62% /home
# lsinitrd /boot/initrd | grep mdadm | egrep -v '@.'
# grep hom /etc/mdadm.conf
ARRAY /dev/md3 metadata=1.0 name=gb250:3hom UUID=6594…
# mdadm -D /dev/md3 | grep ame
              Name : msi85:3hom
#

TW isn't booting from any RAID device either. (4 openSUSE installations sharing the same /boot/ filesystem and RAID devices.)

# grep RETT /etc/os-release
PRETTY_NAME="openSUSE Tumbleweed"
# hostname
msi85
# mount | egrep 'boot|home'
/dev/md121 on /home type ext4 (rw,noatime)
/dev/sda3 on /boot type ext4 (rw,noatime)
# df / /home
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/md118      17966368  8567368   8461024  51% /
/dev/md121     150997668 43405448 106039840  30% /home
# lsinitrd /boot/initrd | grep mdadm | egrep -v '@.'
-rw-r--r-- 1 root root 924 Dec 23 19:59 etc/mdadm.conf
-rwxr-xr-x 1 root root 636976 Sep 18 11:43 usr/sbin/mdadm
# grep hom /etc/mdadm.conf
ARRAY /dev/md7 metadata=1.0 name=msi85:md-home UUID=99e6…
# mdadm -D /dev/md121 | grep ame
              Name : srv10:md-home
#

Logically this seems to me to indicate TW is assembling earlier in the boot process here than Leap.
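If the stored Name is what mismatches, mdadm's --update can rewrite it during
assembly; a possible sequence for the TW /home array, with the member
partitions as placeholders and assuming /home can be unmounted first:

# umount /home
# mdadm --stop /dev/md121
# mdadm --assemble /dev/md7 --update=name --name=msi85:md-home /dev/sdXn /dev/sdYn

--update=homehost is the analogous knob for the host half of the name; either
should change what mdadm -D subsequently reports as Name.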
Each installation on each machine uses one identical file for /etc/mdadm.conf.
Leap example:

# mdadm -D /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Tue Aug 28 00:35:21 2018
        Raid Level : raid1
        Array Size : 8834944 (8.43 GiB 9.05 GB)
     Used Dev Size : 8834944 (8.43 GiB 9.05 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Jan 18 04:01:09 2025
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : hostname:filesystemlabel
              UUID : …
            Events : 1789

    Number   Major   Minor   RaidDevice State
       3       8       21        0      active sync   /dev/sdb5
       2       8        5        1      active sync   /dev/sda5
#

How can I get TW to do as Leap does, and use the names in mdadm.conf? Are
--homehost= and/or --prefer= needed to be used somehow with mdadm's Manage
mode to change something recorded on each device as shown by mdadm -D?
The question remains: if property matching is the issue, which properties need
to match, and how? And why is it TW that differs from mdadm.conf?
--
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

     Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata