[opensuse] How to enable RAID 1 in Leap 42.2?
Hello:

I have installed openSUSE Leap 42.2 on a standalone hard disk. Then I connected to the system 2 other hard disks which have several raid 1 (mirror) arrays. When I boot Leap 42.2 it doesn't activate the raid arrays. How can I enable raid (mdraid) in Leap 42.2?

In the YaST services manager I see mdadm-grow-continue@, mdadm-last-resort@, mdmon@ and mdmonitor services, all of which are in disabled/inactive state. I don't know what they are for or which one I should enable (if any of them at all).

Thanks,
Istvan
01.07.2017 01:56, Istvan Gabor wrote:
Hello:
I have installed openSUSE Leap 42.2 on a standalone hard disk. Then I connected to the system 2 other hard disks which have several raid 1 (mirror) arrays. When I boot Leap 42.2 it doesn't activate the raid arrays. How can I enable raid (mdraid) in Leap 42.2?
What is the content of /etc/mdadm.conf?
In yast services manager I see mdadm-grow-continue@, mdadm-last-resort@, mdmon@, mdmonitor services, all of which are in disabled/inactive state. I don't know what they are for and which one I should enable (if any of them at all).
On Sat, 1 Jul 2017 09:31:10 +0300, Andrei Borzenkov wrote:
01.07.2017 01:56, Istvan Gabor wrote:
Hello:
I have installed openSUSE Leap 42.2 on a standalone hard disk. Then I connected to the system 2 other hard disks which have several raid 1 (mirror) arrays. When I boot Leap 42.2 it doesn't activate the raid arrays. How can I enable raid (mdraid) in Leap 42.2?
What is the content of /etc/mdadm.conf?
There is no /etc/mdadm.conf file yet. In this case I would expect that the raid devices are assembled with default names like md127, md126, etc.

And --detail --scan outputs nothing:

# mdadm --detail --scan
#

It seems to me that mdadm service is not running.

Thanks,
Istvan
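For reference, the kernel's own view of assembled arrays (independent of mdadm.conf or any service) can be checked with:

  cat /proc/mdstat

An empty Personalities line there, or no /proc/mdstat at all, would suggest the md modules are not loaded, which fits the reply below.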
Istvan Gabor wrote:
On Sat, 1 Jul 2017 09:31:10 +0300, Andrei Borzenkov wrote:
01.07.2017 01:56, Istvan Gabor wrote:
Hello:
I have installed openSUSE Leap 42.2 on a standalone hard disk. Then I connected to the system 2 other hard disks which have several raid 1 (mirror) arrays. When I boot Leap 42.2 it doesn't activate the raid arrays. How can I enable raid (mdraid) in Leap 42.2?
What is the content of /etc/mdadm.conf?
There is no /etc/mdadm.conf file yet. In this case I would expect that the raid devices are assembled with default names like md127 md126 etc.
And --detail --scan outputs nothing.
# mdadm --detail --scan
#
It seems to me that mdadm service is not running.
You probably need to load the modules.

-- Per Jessen, Zürich (20.4°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On Sat, 01 Jul 2017 12:03:35 +0200, Istvan Gabor wrote:
On Sat, 1 Jul 2017 09:31:10 +0300, Andrei Borzenkov wrote:
01.07.2017 01:56, Istvan Gabor wrote:
Hello:
I have installed openSUSE Leap 42.2 on a standalone hard disk. Then I connected to the system 2 other hard disks which have several raid 1 (mirror) arrays. When I boot Leap 42.2 it doesn't activate the raid arrays. How can I enable raid (mdraid) in Leap 42.2?
What is the content of /etc/mdadm.conf?
There is no /etc/mdadm.conf file yet. In this case I would expect that the raid devices are assembled with default names like md127 md126 etc.
I copied the mdadm.conf file from the previous system to the current one. Now the arrays are assembled correctly. Either my knowledge about the necessity of the mdadm.conf file was wrong, or recent mdadm works differently than previous versions.
And --detail --scan outputs nothing.
# mdadm --detail --scan
#
Now --detail --scan outputs the list of arrays. Does --detail --scan list only running arrays?

What is the procedure for creating an mdadm.conf file if I don't have one? How can I discover possible arrays and include them in mdadm.conf if the arrays are not running?

Thanks,
Istvan
Istvan Gabor wrote:
What is the procedure for creating an mdadm.conf file if I don't have one? How can I discover possible arrays and include them in mdadm.conf if the arrays are not running?
You could have googled that, but here it is:

mdadm --examine --scan >> /etc/mdadm.conf

-- Per Jessen, Zürich (20.2°C)
http://www.hostsuisse.com/ - virtual servers, made in Switzerland.
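A minimal sketch of the whole procedure, assuming the component disks are already attached (the array names in mdadm.conf come from the superblocks, so nothing here is specific to this thread):

  mdadm --examine --scan >> /etc/mdadm.conf   # reads superblocks; works even when arrays are stopped
  mdadm --assemble --scan                     # assembles everything listed in mdadm.conf
  cat /proc/mdstat                            # verify the arrays are running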
On 01/07/17 11:30, Per Jessen wrote:
Istvan Gabor wrote:
What is the procedure for creating an mdadm.conf file if I don't have one? How can I discover possible arrays and include them in mdadm.conf if the arrays are not running?
You could have googled that, but here it is:
mdadm --examine --scan >> /etc/mdadm.conf
Note that mdadm.conf is optional. I don't have one on my system. What you need is to enable mdadm in grub. I'm not too sure of the details off the top of my head, but iirc you want "domdadm" on the linux boot line, and you need mdadm support in grub.

Otherwise, you may need to get the command "mdadm --assemble --scan" to run on every boot.

Take a look at the raid wiki - it tells you how to set up a system manually.

Cheers,
Wol
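On Leap that would presumably be done via /etc/default/grub; a sketch using the usual openSUSE paths, not verified against 42.2:

  # in /etc/default/grub, append the parameter to the default command line:
  #   GRUB_CMDLINE_LINUX_DEFAULT="... domdadm"
  # then regenerate the grub configuration:
  grub2-mkconfig -o /boot/grub2/grub.cfg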
On Sat, 1 Jul 2017 16:39:03 +0100, Wols Lists wrote:
On 01/07/17 11:30, Per Jessen wrote:
Istvan Gabor wrote:
What is the procedure for creating an mdadm.conf file if I don't have one? How can I discover possible arrays and include them in mdadm.conf if the arrays are not running?
You could have googled that, but here it is:
mdadm --examine --scan >> /etc/mdadm.conf
Note that mdadm.conf is optional. I don't have one on my system. What you need is to enable mdadm in grub. I'm not too sure of the details off the top of my head, but iirc you want "domdadm" on the linux boot line, and you need mdadm support in grub.
Otherwise, you may need to get the command "mdadm --assemble --scan" to run on every boot.
Thanks, I will check this the next time I access that machine.

Istvan
On Sat, 01 Jul 2017 12:30:17 +0200, Per Jessen wrote:
Istvan Gabor wrote:
What is the procedure for creating an mdadm.conf file if I don't have one? How can I discover possible arrays and include them in mdadm.conf if the arrays are not running?
You could have googled that, but here it is:
Indeed, you're right.
mdadm --examine --scan >> /etc/mdadm.conf
Thanks. (I thought that --examine only gives information on the file systems containing array parts, not on the arrays themselves.)

Istvan
Istvan Gabor wrote:
On Sat, 01 Jul 2017 12:03:35 +0200, Istvan Gabor wrote:
On Sat, 1 Jul 2017 09:31:10 +0300, Andrei Borzenkov wrote:
01.07.2017 01:56, Istvan Gabor wrote:
There is no /etc/mdadm.conf file yet. In this case I would expect that the raid devices are assembled with default names like md127 md126 etc.
I copied the mdadm.conf file from the previous system to the current one. Now the arrays are assembled correctly. Either my knowledge about the necessity of the mdadm.conf file was wrong, or recent mdadm works differently than previous versions.
Are the new disk raids by any chance full disk raids, and/or are they not marked as raid in the partition table?
On 01/07/17 12:36, Peter Suetterlin wrote:
Are the new disk raids by any chance full disk raids, and/or are they not marked as raid in the partition table?
What code is that :-) Seriously, there is no partition code for raid, iirc. Certainly not with GPT, and not with modern linux kernels either, I don't think.

Version 0 raid arrays were assembled by the kernel, but they are now obsolete. Version 1 arrays are assembled by mdadm, either by reading mdadm.conf if it exists, or by reading each partition looking for a superblock.

That explains why the only supported way of booting off raid without using an initramfs is either UEFI or a v1.0 mirror. The kernel boots read-only as if it were an ordinary non-raid disk, then replaces root with the assembled mirror.

Cheers,
Wol
On 2017-07-01 17:51, Wols Lists wrote:
On 01/07/17 12:36, Peter Suetterlin wrote:
Are the new disk raids by any chance full disk raids, and/or are they not marked as raid in the partition table?
What code is that :-) Seriously, there is no partition code for raid, iirc. Certainly not with GPT, and not with modern linux kernels either I don't think.
There is, but not used. From fdisk output:

GPT:

Command (m for help): l
  (long list of GPT partition type GUIDs, including:)
  29 Linux RAID    A19D880F-05FC-4D3B-A006-743F0F84911E   <==

msdos:

Command (m for help): l
  (long list of MBR partition type codes, including:)
  fd  *Linux raid auto*

-- Cheers / Saludos, Carlos E. R. (from 42.2 x86_64 "Malachite" at Telcontar)
On 07/01/2017 10:51 AM, Wols Lists wrote:
On 01/07/17 12:36, Peter Suetterlin wrote:
Are the new disk raids by any chance full disk raids, and/or are they not marked as raid in the partition table?
What code is that :-) Seriously, there is no partition code for raid, iirc. Certainly not with GPT, and not with modern linux kernels either I don't think.
Version 0 raid arrays were assembled by the kernel, but they are now obsolete. Version 1 arrays are assembled by mdadm, either by reading mdadm.conf if it exists, or by reading each partition looking for a superblock.
That explains why the only supported way of booting off of raid without using an initramfs is either UEFI or a v1.0 mirror. The kernel boots read-only as if it was an ordinary non-raid disk, then replaces root with the assembled mirror.
Cheers, Wol
See: https://wiki.archlinux.org/index.php/RAID

  Create the Partition Table (GPT)

  It is highly recommended to pre-partition the disks to be used in the array. Since most RAID users are selecting HDDs >2 TB, GPT partition tables are required and recommended. Disks are easily partitioned using gptfdisk. After created, the partition type should be assigned hex code FD00. If a larger disk array is employed, consider assigning disk labels or partition labels to make it easier to identify an individual disk later. Creating partitions that are of the same size on each of the devices is preferred. A good tip is to leave approx 100 MB at the end of the device when partitioning. See below for rationale.

  Partition Types for (MBR)

  For those creating partitions on HDDs with a MBR partition table, the partition types available for use are:

  0xDA (for non-fs data -- **current recommendation by kernel.org**)
  0xFD (for raid autodetect arrays -- was useful before booting an initrd to load kernel modules)

-- David C. Rankin, J.D., P.E.
On 2017-07-03 08:10, David C. Rankin wrote:
See:
https://wiki.archlinux.org/index.php/RAID
Create the Partition Table (GPT)
It is highly recommended to pre-partition the disks to be used in the array. Since most RAID users are selecting HDDs >2 TB, GPT partition tables are required and recommended. Disks are easily partitioned using gptfdisk.
After created, the partition type should be assigned hex code FD00. If a larger disk array is employed, consider assigning disk labels or partition labels to make it easier to identify an individual disk later. Creating partitions that are of the same size on each of the devices is preferred. A good tip is to leave approx 100 MB at the end of the device when partitioning. See below for rationale.
Huh? Let's see:

«When replacing a failed disk of a RAID, the new disk has to be exactly the same size as the failed disk or bigger — otherwise the array recreation process will not work. Even hard drives of the same manufacturer and model can have small size differences. By leaving a little space at the end of the disk unallocated one can compensate for the size differences between drives, which makes choosing a replacement drive model easier. Therefore, it is good practice to leave about 100 MB of unallocated space at the end of the disk.»

Curious!

It also says:

«Note: It is also possible to create a RAID directly on the raw disks (without partitions), but not recommended because it can cause problems when swapping a failed disk.»

-- Cheers / Saludos, Carlos E. R. (from 42.2 x86_64 "Malachite" at Telcontar)
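As an illustration of the "leave ~100 MB free" tip with gptfdisk's sgdisk (the device name is hypothetical and the size only indicative):

  # one partition from the default start to 100 MiB before the end of the disk,
  # typed as Linux RAID (FD00)
  sgdisk --new=1:0:-100M --typecode=1:FD00 /dev/sdX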
On 03/07/17 14:34, Carlos E. R. wrote:
On 2017-07-03 08:10, David C. Rankin wrote:
See:
Not from the horse's mouth, I notice ... :-) (Yes I do know Arch has a good reputation :-)
Create the Partition Table (GPT)
It is highly recommended to pre-partition the disks to be used in the array. Since most RAID users are selecting HDDs >2 TB, GPT partition tables are required and recommended. Disks are easily partitioned using gptfdisk.
After created, the partition type should be assigned hex code FD00.
I think you will find this is cargo-cult, monkey-see-monkey-do :-)
If a larger disk array is employed, consider assigning disk labels or partition labels to make it easier to identify an individual disk later. Creating partitions that are of the same size on each of the devices is preferred. A good tip is to leave approx 100 MB at the end of the device when partitioning. See below for rationale.
Huh?
Let's see:
«When replacing a failed disk of a RAID, the new disk has to be exactly the same size as the failed disk or bigger — otherwise the array recreation process will not work. Even hard drives of the same manufacturer and model can have small size differences. By leaving a little space at the end of the disk unallocated one can compensate for the size differences between drives, which makes choosing a replacement drive model easier. Therefore, it is good practice to leave about 100 MB of unallocated space at the end of the disk.»
Curious!
This is real, and does cause problems ... a 3TB disk (for example) is guaranteed to be *at least* three million million bytes. But depending on manufacturing (now that Cylinders, Heads, Sectors is just a fiction) there's no constraint on how much (or little) over that figure is acceptable. So disks do vary ... a case occurred on the raid list maybe six months ago?
It also says:
«Note: It is also possible to create a RAID directly on the raw disks (without partitions), but not recommended because it can cause problems when swapping a failed disk.»
Given that the guy who WROTE a large chunk of the raid code uses raw disks as a matter of course, I'm afraid I wouldn't give much credence to that.

Where it does matter is that you cannot *boot* off a partition on a raw raid disk. Pretty much all boot code relies on an MBR or GPT to locate the OS.

The other reason for not using raw disks is that a lot of disk tools assume "no gpt/mbr == blank disk". You don't want an install CD to stomp all over your raid because it assumed you weren't using that disk.

All this has been discussed "recently" on the linux raid list, and I could probably find it in my archive, but with three years to search that's a lot of emails ... :-)

Cheers,
Wol
On Mon, Jul 3, 2017 at 5:37 PM, Wols Lists <antlists@youngman.org.uk> wrote:
Where it does matter, is you cannot *boot* off a partition on a raw raid disk. Pretty much all boot code relies on an MBR or GPT to locate the OS.
Technically it is possible by offsetting the start of the data when creating the RAID. Then you have enough space to store anything you like, including code that can access Linux MD to "locate the OS".
On 03/07/17 15:48, Andrei Borzenkov wrote:
On Mon, Jul 3, 2017 at 5:37 PM, Wols Lists <antlists@youngman.org.uk> wrote:
Where it does matter, is you cannot *boot* off a partition on a raw raid disk. Pretty much all boot code relies on an MBR or GPT to locate the OS.
Technically it is possible by offsetting the start of the data when creating the RAID. Then you have enough space to store anything you like, including code that can access Linux MD to "locate the OS".
Mmmmm ... and what code would that be :-)

Yes, it is possible. But both grub and uefi require a partition, and the raid superblocks v1.1 and v1.2 are both stored at the start of the disk, where your code is likely to stomp all over them. v1.0, on the other hand, is stored at the end, but also uses an offset of 0, so your code will stomp all over the start of the data.

Plus, anything that does a reshape will change the offset, risking stomping all over your code.

So yes, it could be done. It would not be wise, and would involve writing a new boot loader ... :-)

Cheers,
Wol
03.07.2017 18:24, Wols Lists wrote:
Plus, anything that does a reshape will change the offset, risking stomping all over your code.
Yes, that's a good point. Although in general modifying the data layout on a boot disk is potentially dangerous, so there is nothing new here. And we are now rather far from the original "boot code relies on an MBR or GPT to locate the OS", which is simply a wrong statement, even if there are other valid reasons for not installing a bootloader on unpartitioned disks with Linux MD.
On Mon, 3 Jul 2017 17:48:52 +0300 Andrei Borzenkov <arvidjaar@gmail.com> wrote:
On Mon, Jul 3, 2017 at 5:37 PM, Wols Lists <antlists@youngman.org.uk> wrote:
Where it does matter, is you cannot *boot* off a partition on a raw raid disk. Pretty much all boot code relies on an MBR or GPT to locate the OS.
Technically it is possible by offsetting the start of the data when creating the RAID. Then you have enough space to store anything you like, including code that can access Linux MD to "locate the OS".
I did once use raw disks for a RAID, and in my case I learned it is all too easy to overwrite the start of it by accident. So afterwards I use a formatted disk, with a small partition at the beginning, a small partition at the end, and a large partition of an exactly defined size in the middle to hold the RAID. The small partitions can be used for system backups or the like, or just left as insurance against accidents and future disk size differences. It's a lot simpler and easier for others to understand than exploiting every last feature of the software, IMHO.
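A sketch of that layout with sgdisk (device name and sizes are illustrative, not taken from the thread):

  sgdisk --new=1:0:+512M /dev/sdX                     # small partition at the start
  sgdisk --new=2:0:+2980G --typecode=2:FD00 /dev/sdX  # exactly sized middle partition for the RAID
  sgdisk --new=3:0:0 /dev/sdX                         # whatever is left at the end, as insurance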
On Sat, 1 Jul 2017 12:36:35 +0100, Peter Suetterlin wrote:
Istvan Gabor wrote:
On Sat, 01 Jul 2017 12:03:35 +0200, Istvan Gabor wrote:
On Sat, 1 Jul 2017 09:31:10 +0300, Andrei Borzenkov wrote:
01.07.2017 01:56, Istvan Gabor wrote:
There is no /etc/mdadm.conf file yet. In this case I would expect that the raid devices are assembled with default names like md127 md126 etc.
I copied the mdadm.conf file from the previous system to the current one. Now the arrays are assembled correctly. Either my knowledge about the necessity of the mdadm.conf file was wrong, or recent mdadm works differently than previous versions.
Are the new disk raids by any chance full disk raids, and/or are they not marked as raid in the partition table?
No, not full disk raids. Two identical hard disks, each with several partitions, and the partitions are arranged into arrays. I can't recall how the partitions are marked in the partition table, but I guess it is irrelevant; mdadm doesn't use that info.

Thanks,
Istvan
01.07.2017 13:03, Istvan Gabor wrote:
On Sat, 1 Jul 2017 09:31:10 +0300, Andrei Borzenkov wrote:
01.07.2017 01:56, Istvan Gabor wrote:
Hello:
I have installed openSUSE Leap 42.2 on a standalone hard disk. Then I connected to the system 2 other hard disks which have several raid 1 (mirror) arrays. When I boot Leap 42.2 it doesn't activate the raid arrays. How can I enable raid (mdraid) in Leap 42.2?
What is the content of /etc/mdadm.conf?
There is no /etc/mdadm.conf file yet. In this case I would expect that the raid devices are assembled with default names like md127 md126 etc.
Yes, me too. Quickly testing in a VM - it does assemble the array without mdadm.conf (foreign devices are assembled in auto-read-only mode).
And --detail --scan outputs nothing.
# mdadm --detail --scan
#
It seems to me that mdadm service is not running.
There is no such thing as "mdadm service".
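A side note on the auto-read-only state mentioned above: such an array switches to read-write on the first write, or can be switched manually; a minimal example with an assumed device name:

  mdadm --readwrite /dev/md127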
On Sat, 1 Jul 2017 14:44:19 +0300, Andrei Borzenkov wrote:
01.07.2017 13:03, Istvan Gabor wrote:
On Sat, 1 Jul 2017 09:31:10 +0300, Andrei Borzenkov wrote:
01.07.2017 01:56, Istvan Gabor wrote:
Hello:
I have installed openSUSE Leap 42.2 on a standalone hard disk. Then I connected to the system 2 other hard disks which have several raid 1 (mirror) arrays. When I boot Leap 42.2 it doesn't activate the raid arrays. How can I enable raid (mdraid) in Leap 42.2?
What is the content of /etc/mdadm.conf?
There is no /etc/mdadm.conf file yet. In this case I would expect that the raid devices are assembled with default names like md127 md126 etc.
Yes, me too. Quickly testing in a VM - it does assemble the array without mdadm.conf (foreign devices are assembled in auto-read-only mode).
And --detail --scan outputs nothing.
# mdadm --detail --scan
#
It seems to me that mdadm service is not running.
There is no such thing as "mdadm service".
OK, I understand. What triggers the recognition and assembly of the arrays at boot, then? Wols mentions kernel boot parameters and grub, but I never edited them. You wrote earlier that it should be fully automatic via udev rules. Might this be a udev issue, similar to the one I had before?

https://lists.opensuse.org/opensuse/2017-01/msg00438.html

This time it's better, because at least mdadm.conf triggers array assembly.

Thanks,
Istvan
02.07.2017 12:58, Istvan Gabor wrote:
It seems to me that mdadm service is not running.
There is no such thing as "mdadm service".
OK, I understand. What triggers the recognition and assembly of the arrays at boot, then? Wols
This is done by udev rules. First, every new block device is scanned for known signatures; if it is found to be part of a Linux MD array, "mdadm --incremental" is run on it. There is also a timer that is started to force assembly if not all components appear.
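The rules in question normally ship with the mdadm package; a quick way to inspect them (typical path, worth verifying on the actual install):

  cat /usr/lib/udev/rules.d/64-md-raid-assembly.rules

If that file is missing, incremental assembly at boot cannot happen, which would match the symptoms above.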
On 02/07/17 12:55, Andrei Borzenkov wrote:
02.07.2017 12:58, Istvan Gabor wrote:
It seems to me that mdadm service is not running.
There is no such thing as "mdadm service".
OK, I understand. What triggers the recognition and assembly of the arrays at boot, then? Wols
This is done by udev rules. First, every new block device is scanned for known signatures; if it is found to be part of a Linux MD array, "mdadm --incremental" is run on it. There is also a timer that is started to force assembly if not all components appear.
And if you didn't tell SuSE about the raid arrays, it's possible that those udev rules haven't been installed. How to check/fix that, I don't know, though.

Cheers,
Wol
On 07/01/2017 06:44 AM, Andrei Borzenkov wrote:
It seems to me that mdadm service is not running.
There is no such thing as "mdadm service".
Well, there is a monitor service, e.g.

# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program. To start mdadm's monitor mode, enable
# mdadm.service in systemd.
#
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
MAILADDR you@yourhost.tld

If MAILADDR isn't set, mdadm.service will refuse to start.

-- David C. Rankin, J.D., P.E.
David C. Rankin wrote:
On 07/01/2017 06:44 AM, Andrei Borzenkov wrote:
It seems to me that mdadm service is not running.
There is no such thing as "mdadm service".
Well, there is a monitor service, e.g.
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program. To start mdadm's monitor mode,
# enable mdadm.service in systemd.
#
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
MAILADDR you@yourhost.tld
If MAILADDR isn't set, mdadm.service will refuse to start.
I'm not sure where the above is from, but that service used to be called "mdadmd", and in Leap it's called mdmonitor.service. By default, mails are sent to root on the local system. It's very useful; we use it on all our external systems, which all use RAID1.

-- Per Jessen, Zürich (15.5°C)
http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.
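Enabling it would presumably be the standard systemd invocation (a sketch, assuming MAILADDR is already set in /etc/mdadm.conf):

  systemctl enable mdmonitor.service
  systemctl start mdmonitor.service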
participants (8):
- Andrei Borzenkov
- Carlos E. R.
- Dave Howorth
- David C. Rankin
- Istvan Gabor
- Per Jessen
- Peter Suetterlin
- Wols Lists