Dennis, et al --

...and then DennisG said...
%
% With the HUGE caveat that I have not studied your long thread and only
% because it seems you are stuck . . .

Heh. I am *so* seriously stuck. Thanks for the input!

%
% And assuming you don't care about the advantages of using GPT
% partitioning, would be fine with the old BIOS method, want to use RAID
% to mirror, AND you have a solid backup:

I'd love to use GPT, because I like its flexibility, but I think I could
happily live with MSDOS partitioning because I only need

   swap
   previous-version
   future-version
   data

partitions, which fits within MBR's four primaries. [Am I right? I
don't need a dedicated /boot slice, do I?]

And I shouldn't need a backup, because I'm trying to install to the new
256G sde, keeping the 128G sda around only for migration later, at which
point I will replace it with another 256G drive and finish the
mirroring. I think I do want to mirror, but let's talk about that
below ...

%
% You can use the following hack from the old days . . . (exactly how
% will depend on where your production data lives now or what your
% backup method is, but the gist is the same) . . .
%
% 1. Break the RAID

We can skip this; not only is there no other half, but I am also
installing from scratch.

% 2. Re-partition an MBR disk
% 3. Clear any grub code remnants from that disk's MBR block

I'll rewrite sde cleanly and go back to the specific steps I saw a month
or so ago for wiping any grub, md, or other data. It should be as easy
as copying /dev/zero over the first, say, 10G of the disk, but I promise
to go reading first :-) (I've taken a stab at these steps below.)

% 4. Copy the generic /usr/lib/boot/master-boot-code to its MBR block

How do I do this? The LEAP 15.2 installation should do that for me,
right? Or do I do it manually -- after installation but before booting,
or once booted and under chroot? (Again, see my stab below.)

% 5. Set the boot flag on the boot directory partition

Would I have two boot flags, since I have two slices that could hold
OSes?

% 6. Copy the data to the disk

This is the install process, I hope.

% 7. Do 2-5 on the other disk(s)

I can use sfdisk to dump the sde partition table, copy it to the new
sda, and then randomize the GUIDs (sketched below, too). That should
get me ready to go, right?

% 8. Build the array off the first disk, depending on RAID type/nbr
% disks in array

While we're talking about mirroring ... Should I make the entire device
a mirror and partition the md, or should I create partitions and make
each one a mirror? That is, is

   /dev/sde + /dev/sda     # devices
   /dev/md0                # entire-drives mirror
   /dev/md0p1              # partition                    # swap
   /dev/md0p2              # partition                    # prev
   /dev/md0p3              # partition                    # next
   /dev/md0p4              # partition                    # data

(although, I guess, this gives me the flexibility of a GPT label on the
monolithic mirror device, and I could have as many slices as I want
inside) or

   /dev/sde  + /dev/sda    # devices
   /dev/sde1 + /dev/sda1   # identical partitions
   /dev/md1                # partition+partition mirror   # swap
   /dev/sde2 + /dev/sda2   # identical partitions
   /dev/md2                # partition+partition mirror   # prev
   /dev/sde3 + /dev/sda3   # identical partitions
   /dev/md3                # partition+partition mirror   # next
   /dev/sde4 + /dev/sda4   # identical partitions
   /dev/md4                # partition+partition mirror   # data

better? I presume that in either case I'd be writing (or the LEAP
install would write) grub2 boot code to the disk MBR on each, but I
guess maybe to the MBR on the mirror device in the first case ...

[This presumes mirroring, of course, which you suggest is not necessary,
but if I do mirror I want to follow whatever is the best approach.]
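
So that you can check my homework, here is roughly what I picture for
steps 2 through 8, with my device names and a healthy dose of guessing.
For steps 2-3, after triple-checking that sde really is the new, empty
disk:

   # my guess for steps 2-3: clear old signatures, then flatten the
   # start of the disk (MBR, grub embedding area, and then some)
   wipefs -a /dev/sde
   dd if=/dev/zero of=/dev/sde bs=1M count=16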
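
For step 4, is this really all there is to it (from a rescue shell or a
chroot, I presume)? If I have my facts straight, the boot-code area is
only the first 440 bytes of the MBR, before the disk ID and partition
table:

   # my guess for step 4: copy the generic boot code into sde's MBR,
   # leaving the disk ID and partition table untouched
   dd if=/usr/lib/boot/master-boot-code of=/dev/sde bs=440 count=1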
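
For step 5, guessing that the bootable system lands on sde2, and
gathering that an MSDOS label wants exactly one active partition (which
may answer my two-flags question with a "no"):

   # my guess for step 5: flag the slice holding the bootable system
   sfdisk --activate /dev/sde 2
   # or equivalently: parted /dev/sde set 2 boot on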
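
For step 7, this is the sfdisk dance I had in mind:

   # my guess for step 7: clone sde's partition table onto sda
   sfdisk -d /dev/sde | sfdisk /dev/sda
   # and, only if I end up going GPT after all, randomize the new
   # disk's GUIDs so the two tables aren't twins:
   sgdisk -G /dev/sda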
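
And if the partition-by-partition layout above is the right one, I
suppose step 8 for me is just a degraded RAID1 per slice until the
second 256G drive arrives -- again, only my reading of the mdadm man
page:

   # my guess for step 8: one two-way mirror per slice, second half
   # "missing" until the replacement disk shows up
   mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 missing
   mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde2 missing
   mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde3 missing
   mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sde4 missing
   # later: mdadm /dev/md1 --add /dev/sda1   (and so on down the line)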
%
% This approach allows the choice of mirroring a boot directory or not,
% having them be identical or not, booting individually or chain
% loading. There are no sector pointers as ordinarily used by grub2.

That sounds good -- and maybe even so flexible that it's more complex
than I need. I don't mind complexity, but I also need For-Dummies-level
instructions as I get farther and farther away from my IT-job-world
days :-/

%
% In my quick scan I didn't see discussion as to why you want a nested RAID

No RAID5; I'm just RAID1-mirroring two boot drives -- although I had
planned on mirroring individual partitions rather than entire devices.

% setup. Do you really need such complexity? I variously used 0, 1, and 5
% years ago - but dropped RAID given SSD performance, because there are
% easier ways to get redundancy, and because of the extra
% attention/overhead RAID can

This is quite interesting to me. My goal is 1) to have fault tolerance
in the event of a device failure so that 2) the machine stays running
until I can plan a shutdown and a device swap. If I have backups and
continuous sync and whatever, then I kind of get #1, but I don't know of
anything other than RAID that will give me #2. I'm often away from home
for a week at a time and can't have the box down (not that it hasn't
been down for I've-lost-track-how-long now, which is killing me ...).
How would you ensure continuous uptime in the face of a failure?

% require. On one box I have 4 instances of openSUSE on 4 disks with a
% mix of GPT and MBR (2 of which are "mirrored" with rsync), plus a disk
% with W10. I

I'll happily accept pointers to, or copies of, any scripts and cron
jobs :-)

% can boot from any disk individually or chain from any one to boot any
% one of the others.

I have no interest in loading lots of other candidate OSes (unless the
whole "containerized" thing blows up, anyway ;-) but instead just want
to be able to practice an upgrade migration on a copy ("next") while
retaining the ability to go back to the old ("prev") instance.

%
% HTH.

You've given me hope. THANK YOU!


HAND

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt
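
P.S. For the archives, here is the sort of rsync-in-cron "mirror" I
imagine you mean -- purely my guess, with made-up mount points (/ on the
live disk, /mnt/mirror the spare):

   # hypothetical nightly clone of the running root onto the spare disk;
   # -a keeps modes/owners/times, -H keeps hard links, -x stays on one
   # filesystem, and --delete makes it a true mirror
   30 3 * * *  rsync -aHx --delete / /mnt/mirror/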