On Thursday, 19 November 2020 12:48:28 CET, David T-G wrote:
Stakanov --
...and then Stakanov said...
%
% Situation:
% I have a 2 GB disc mounted as /var/lib/libvirt/images.
...
% So I took a second one, same size to set up a RAID1
% ...
%
% So I want to create a RAID called md/1 and use it as RAID setting it up with
% one "missing". dev/sdd is the one with the data. /dev/sdf is the virgin new.
%
% I began with:
% dd if=/dev/sdd of=/dev/sdf count=1
Why are you bothering with this? First, you have no particular reason to duplicate the partition table. Second, you should be using something like sgdisk --backup or sfdisk -d to dump the table if that's what you really wanted to do (which, again, I think it isn't here).
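If you ever do need that, it's just a sketch and the device names below are only the ones from your mail, but something along these lines works:

sfdisk -d /dev/sdd > sdd-table.txt
sfdisk /dev/sdf < sdd-table.txt

or, for GPT,

sgdisk --backup=sdd-table.bin /dev/sdd
sgdisk --load-backup=sdd-table.bin /dev/sdf
sgdisk --randomize-guids /dev/sdf

That dumps the table to a file, writes it to the new disc, and (in the GPT case) gives the copy fresh GUIDs so the two discs don't collide.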
Personally, I like to make a partition of almost the entire disk for my RAIDed content and then leave a tiny slice at the end where I write data about the disk and RAID set. I even go so far as to format each disk's tiny slice with different filesystems, but that's just me :-) Whether you use a partition or the whole device, though, it's simple enough just to set that up manually with fdisk or gdisk rather than duplicating something that wasn't meant to hold RAID content in the first place.
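As an illustration of that scheme (made-up sizes and labels, so adjust to taste and check the man page before running anything):

sgdisk -n 1:0:-16M -t 1:fd00 /dev/sdf
sgdisk -n 2:0:0 -t 2:8300 /dev/sdf
mkfs.ext4 -L raidnotes /dev/sdf2

The first command makes a Linux RAID partition that stops about 16M short of the end, the second turns the leftover tail into a small plain partition, and the last formats that slice to hold the notes about the disk and RAID set.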
...
%
% Now I wish to give it a file system (in my case ext4) and there...the trouble
% begin. I do not remember obviously how to do this right. I tried:
% mkfs.ext4 /dev/md/1
[snip]
Most folks prefer to create a valid partition table on the new device and then put a filesystem on that. Try something like
fdisk /dev/md/1
to set up your partitions and then
mkfs.ext4 /dev/md/1p1
to create a filesystem on the new partition. [Actual results may vary, of course; do not just mouse-n-paste :-]
Ok, I did it and it worked. So, how to take a single disc and transform it with mdadm into a RAID1 without having to restore it from backed-up data (which should be there BTW, in case things go south on you). I followed mainly:

https://asergo.com/knowledge-base/bare-metal-servers/raid/mdadm.html

sudo parted --list gives you an overview of your partitions, should you not know them already. sudo lsblk gives you an overview of all disc names and mount points. With this info at hand, and in my case neither a root file system nor a bootable partition:

$ sudo sgdisk /dev/sda --replicate /dev/sdb
The operation has completed successfully.
$ sudo sgdisk --randomize-guids /dev/sdb
The operation has completed successfully.

The first clones the partition table, the second makes sure the copy does not end up with the same GUIDs. (sda and sdb are arbitrary here; in my example I worked with sdd and sdf, so it is really just to be adapted to the use case.)

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2

creates a RAID device (in my case it was md1 because I already had a RAID for /home in place). This RAID device is purportedly "defective", that is, missing one disc (the disc which is still full of the data we wish to transfer and save - yes, of course you have a valid backup - if you have the money and the space). This will give you the following output:

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

I wanted an EXT4 filesystem like the "original disc", so:

$ sudo mkfs.ext4 /dev/md0
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 73208576 4k blocks and 18309120 inodes
Filesystem UUID: ff25d882-f65b-4e0e-ad49-f8d9756a0f89
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

Note that this time I did not run into the alignment error.

Make a mount point and mount the "new raid" with a temporary name:

$ sudo mkdir /mnt/new-raid
$ sudo mount /dev/md0 /mnt/new-raid

Now, the author of the guide uses rsync, but for some reason I ran into "broken pipe":

$ sudo rsync -auHxv --exclude={"/proc/*","/sys/*","/mnt/*"} /* /mnt/new-raid/

I think I made some syntax error; however, it was also quite slow, so I preferred

$ sudo cp -ax <directory where the old disc is mounted> /mnt/new-raid

and waited for the copy to finish, which in my case with the OS images for KVM took some time nevertheless.

Once done, I mounted the new RAID under the original mount point and tried it out with KVM. Having verified that all was as expected, I decided to complete the operation: erase the old data on the disc and copy the partition table over from the new RAID:

$ sudo wipefs --all /dev/sda
$ sudo sgdisk /dev/sdb --replicate /dev/sda
The operation has completed successfully.

Finally I added it and it began to sync:

$ sudo mdadm /dev/md127 --add /dev/sda2
mdadm: added /dev/sda2

Final step: check the status of the RAID again with

$ sudo cat /proc/mdstat

All RAIDs should be healthy (UU) and all devices should be in use.
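One thing I would add here, as a sketch only since it is not part of the steps above (the file paths depend on the distribution): record the array in mdadm.conf so that it is assembled under the same name at every boot, and look at the detailed state while it resyncs:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
$ sudo mdadm --detail /dev/md0

On some distributions the file lives at /etc/mdadm/mdadm.conf instead, and you may also need to regenerate the initrd (mkinitrd or dracut, depending on the distro) if you want the md name to stay stable across reboots.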
With a root file system this is more complicated, as you can read in the guide, but for a normal disc it was quite straightforward (as long as you take note of the partition numbers and names).