Bug ID 1168914
Summary RAID module uses metadata=1.0 and therefore can't use journal feature for RAID5
Classification openSUSE
Product openSUSE Tumbleweed
Version Current
Hardware Other
OS Other
Status NEW
Severity Normal
Priority P5 - None
Component YaST2
Assignee yast2-maintainers@suse.de
Reporter psikodad@gmx.de
QA Contact jsrain@suse.com
Found By ---
Blocker ---

Summary:
The YaST partitioner always creates RAID devices with "metadata=1.0", so it is
not possible to add a journal to a RAID5 device. Workaround: create the RAID5
manually; mdadm then uses its default metadata version 1.2, which supports a
journal.
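
Condensed, the workaround amounts to the following sketch (device names taken
from the session further below):

# mdadm --create /dev/md127 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# mdadm --manage /dev/md127 --readonly --add-journal /dev/sde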


When I create a new RAID5 with the YaST2 partitioner (storage-ng), the RAID
metadata version is always 1.0.
In fact, it is hard-coded in libstorage-ng-master/storage/Devices/MdImpl.cc:
    string cmd_line = MDADM_BIN " --create " + quote(get_name()) + " --run --level=" +
        boost::to_lower_copy(toString(md_level), locale::classic()) + " --metadata=1.0"
        " --homehost=any";

A few years ago mdadm gained a feature to close the "write hole" of RAID5:
journaling.
See https://www.kernel.org/doc/Documentation/md/raid5-cache.txt
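
According to that document, the journal device can also be specified at
creation time via --write-journal. A rough sketch (device names are purely
illustrative, not from an actual run):

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd --write-journal /dev/sde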

I would like to test it (--add-journal), but it does not work:

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.0
     Creation Time : Tue Apr  7 22:39:34 2020
        Raid Level : raid5
        Array Size : 16776960 (16.00 GiB 17.18 GB)
     Used Dev Size : 8388480 (8.00 GiB 8.59 GB)
      Raid Devices : 3
...
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      active sync   /dev/sdd

# mdadm --manage /dev/md127 --readonly --add-journal /dev/sde
mdadm: Failed to hot add /dev/sde as journal, please try restart /dev/md127.

# mdadm --stop /dev/md127
mdadm: stopped /dev/md127

# mdadm --assemble /dev/md127 /dev/sd[bcd]
mdadm: /dev/md127 has been started with 3 drives.

# mdadm --manage /dev/md127 --readonly --add-journal /dev/sde
mdadm: Failed to hot add /dev/sde as journal, please try restart /dev/md127.

------------
However, it works with metadata=1.2 (mdadm's default):

tw2019:/home/dom # mdadm --stop /dev/md127
mdadm: stopped /dev/md127
tw2019:/home/dom # mdadm --create /dev/md127 --level=5 --raid-devices=3 /dev/sd[bcd]
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid5 devices=3 ctime=Tue Apr  7 22:39:34 2020
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid5 devices=3 ctime=Tue Apr  7 22:39:34 2020
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid5 devices=3 ctime=Tue Apr  7 22:39:34 2020
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md127 started.

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Tue Apr  7 22:47:35 2020
        Raid Level : raid5
        Array Size : 16758784 (15.98 GiB 17.16 GB)
     Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
      Raid Devices : 3
...
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      active sync   /dev/sdd

# mdadm --manage /dev/md127 --readonly --add-journal /dev/sde
mdadm: Journal added successfully, making /dev/md127 read-write
mdadm: added /dev/sde

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Tue Apr  7 22:47:35 2020
        Raid Level : raid5
        Array Size : 16758784 (15.98 GiB 17.16 GB)
     Used Dev Size : 8379392 (7.99 GiB 8.58 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Apr  7 22:48:48 2020
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : journal

              Name : tw2019:127  (local to host tw2019)
              UUID : 5dec6792:8109c604:970842a7:44f0d9e8
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      active sync   /dev/sdd

       4       8       64        -      journal   /dev/sde

=> Suggested fix:
Use mdadm's default metadata version (1.2) for new RAID devices.
For RAID1 there may be reasons to stick with version 1.0.
See also:
yast-storage-ng-master/src/lib/y2storage/boot_requirements_strategies/uefi.rb
        # EFI in RAID can work, but it is not much reliable.
        # See bsc#1081578#c9, FATE#322485, FATE#314829
        # - RAID metadata must be somewhere where it will not interfere with UEFI reading
        #   the disk. libstorage-ng currently uses "mdadm --metadata=1.0" which is OK
        # - The partition boot flag must be on
        # - The partition RAID flag must be on (but setting it resets the boot flag)
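
In terms of the mdadm command line that libstorage-ng builds, the proposal
amounts to something like the following sketch (not actual libstorage-ng code;
device names and --raid-devices counts are made up for illustration):

# mdadm --create /dev/md0 --run --level=raid1 --metadata=1.0 --homehost=any --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --run --level=raid5 --homehost=any --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

i.e. keep --metadata=1.0 only where booting depends on it (the RAID1/EFI case
from the comment above) and omit the option otherwise, so mdadm falls back to
its journal-capable 1.2 default.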

