Hi, all --

It's long past time

diskfarm:~ # cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.2"
ID="opensuse-leap"
ID_LIKE="suse opensuse"
VERSION_ID="15.2"
PRETTY_NAME="openSUSE Leap 15.2"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.2"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"

to upgrade diskfarm. It is also, coincidentally, time to add a mirror drive, which of course one can't do on the running system. Soooo ...

Can I *upgrade* my system to another partition, like one on the new mirror? Or how (rsync? cp -a? cpio? tar? ...) can I copy the old partition content to a new slice of slightly different size (around the RAID metadata) to then boot from it instead, and then update? [I actually have two identical slices on the drive and use dd to copy over as a backup before upgrading, but dd won't work here.]

I am, of course, seeking the simplicity of an upgrade while retaining the security of falling back to the old installation if needed, all in the same timeframe as adding the mirror because, well, duh :-)

TIA & HAND

:-D
--
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt
On 09.08.2022 18:20, David T-G wrote:
Hi, all --
It's long past time
diskfarm:~ # cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.2"
ID="opensuse-leap"
ID_LIKE="suse opensuse"
VERSION_ID="15.2"
PRETTY_NAME="openSUSE Leap 15.2"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.2"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"
to upgrade diskfarm. It is also, coincidentally, time to add a mirror drive, which of course one can't do on the running system. Soooo ...
Actually, with btrfs (which is the default on openSUSE) you can replace a drive with a different one, effectively moving the filesystem content to the new drive, online.
Can I *upgrade* my system to another partition, like one on the new mirror? Or how (rsync? cp -a? cpio? tar? ...) can I copy the old partition content to a new slice of slightly different size (around the RAID metadata) to then boot from it instead, and then update? [I actually have two identical slices on the drive and use dd to copy over as a backup before upgrading, but dd won't work here.]
I am, of course, seeking the simplicity of an upgrade while retaining the security of falling back to the old installation if needed, all in the same timeframe as adding the mirror because, well, duh :-)
TIA & HAND
:-D
Le 09/08/2022 à 18:01, Andrei Borzenkov a écrit :
Actually, with btrfs (which is the default on openSUSE) you can replace a drive with a different one, effectively moving the filesystem content to the new drive, online.
yes, I already did it and it works very well (create the second partition as btrfs, join the two, balance, and remove the old one - just a summary).

There is even a btrfs RAID, but I didn't test that one; you have to know what this RAID is made for.

jdd
--
http://dodin.org
http://valeriedodin.com
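jdd's summary might be sketched as the following command sequence. The device names (/dev/sdb2 for the new partition, /dev/sda2 for the old one) are placeholders, and since these commands act on real disks they are only printed here, not executed:

```shell
# Sketch of the add/balance/remove flow described above; printed only, not run.
cmds='btrfs device add /dev/sdb2 /     # join the new partition to the mounted root
btrfs balance start /                  # spread existing data across both devices
btrfs device remove /dev/sda2 /        # migrate data off and drop the old partition'
echo "$cmds"
```

The remove step is what actually empties the old partition: btrfs relocates all its chunks to the remaining device before letting go of it.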
jdd@dodin.org wrote:
Le 09/08/2022 à 18:01, Andrei Borzenkov a écrit :
Actually, with btrfs (which is the default on openSUSE) you can replace a drive with a different one, effectively moving the filesystem content to the new drive, online.
yes, I already did it and it works very well (create the second partition as BTRFS, join the two, balance and remove the old one - just a summary)
I think he's talking about

btrfs replace start /dev/md0 /dev/md2 /

which effectively moves the whole FS from one device to another. Assuming the new partition is larger, follow it with

btrfs filesystem resize 1:max /

I used that to move the system of our server from an SSD RAID to an NVMe RAID. If you do that, remember to re-make the initrds to include the NVMe drivers.... :-P (of course I didn't...)
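The initrd reminder at the end might look like the following on a dracut-based system such as openSUSE (an assumption; printed here as a reminder sketch rather than executed, since it rewrites the installed initrds):

```shell
# Sketch: rebuild every installed initrd after moving the root FS to new
# hardware, so the needed drivers (e.g. nvme) are included. Printed only.
cmd='dracut --force --regenerate-all'
echo "$cmd"
```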
Andrei, et al --

On 2022-08-09 11:01, Andrei Borzenkov wrote:
% On 09.08.2022 18:20, David T-G wrote:
% > ...
% > to upgrade diskfarm. It is also, coincidentally, time to add a mirror
% > drive, which of course one can't do on the running system. Soooo ...
%
% Actually with btrfs (which is default on openSUSE) you can replace drive
% with different drive, effectively moving filesystem content to different
% drive, online
[snip]

Ah, darn. I wondered, right after sending the message, if I should have mentioned filesystem details. I'm using partitions

diskfarm:~ # parted /dev/sda u MiB p
Model: ATA SanDisk SD6SB1M1 (scsi)
Disk /dev/sda: 122104MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start     End        Size      File system     Name           Flags
 1      1.00MiB   33793MiB   33792MiB  linux-swap(v1)  diskfarm-swap  swap
 2      33793MiB  66561MiB   32768MiB  xfs             diskfarmsuse
 3      66561MiB  99329MiB   32768MiB                  diskfarmknop   legacy_boot
 4      99329MiB  122104MiB  22775MiB  xfs             diskfarm-ssd

on a single disk with xfs

diskfarm:~ # mount | egrep sda
/dev/sda2 on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/sda4 on /mnt/ssd type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)

filesystems. No btrfs here.

So 1) can I upgrade to an empty target, or 2) what's the best non-dd way to copy to a different-sized partition?

TIA again

:-D
--
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt
On 2022-08-09 21:44, David T-G wrote:
Andrei, et al --
On 2022-08-09 11:01, Andrei Borzenkov wrote:
% On 09.08.2022 18:20, David T-G wrote:
% > ...
% > to upgrade diskfarm. It is also, coincidentally, time to add a mirror
% > drive, which of course one can't do on the running system. Soooo ...
%
% Actually with btrfs (which is default on openSUSE) you can replace drive
% with different drive, effectively moving filesystem content to different
% drive, online
[snip]
Ah, darn. I wondered, right after sending the message, if I should have mentioned filesystem details.
I'm using partitions
diskfarm:~ # parted /dev/sda u MiB p
Model: ATA SanDisk SD6SB1M1 (scsi)
Disk /dev/sda: 122104MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot
Number  Start     End        Size      File system     Name           Flags
 1      1.00MiB   33793MiB   33792MiB  linux-swap(v1)  diskfarm-swap  swap
 2      33793MiB  66561MiB   32768MiB  xfs             diskfarmsuse
 3      66561MiB  99329MiB   32768MiB                  diskfarmknop   legacy_boot
 4      99329MiB  122104MiB  22775MiB  xfs             diskfarm-ssd
on a single disk with xfs
diskfarm:~ # mount | egrep sda
/dev/sda2 on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/sda4 on /mnt/ssd type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
filesystems. No btrfs here.
So 1) can I upgrade to an empty target,
Yes, but you have to create the partitions.
or 2) what's the best non-dd way to copy to a different-sized partition?
rsync. Although XFS has its own utilities to clone partitions, and if RAID is there, there are other methods. I'm partial to rsync, because you get a new filesystem with new settings. But it is offline, and slow.

OPTIONS="--archive --acls --xattrs --hard-links --sparse --stats --human-readable --del"
rsync $OPTIONS /from/ /to

--
Cheers / Saludos,
Carlos E. R.
(from 15.3 x86_64 at Telcontar)
On 09.08.2022 22:44, David T-G wrote:
So 1) can I upgrade to an empty target,
I would call it "migrate", but yes, of course you can.
or 2) what's the best non-dd way to copy to a different-sized partition?
You can use any tool that preserves hard links; tar, rsync, and cpio all do it. After copying the filesystem content you will need to configure the bootloader to boot from the new partition. That does not happen automagically.
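To see the hard-link point in practice, this small demo (safe: it touches only throwaway temp directories) copies a tree with a tar pipe and checks that the link count survives; for the real migration you would run the same kind of pipe from / into the mounted new partition:

```shell
# Demonstrate that a tar pipe preserves hard links (GNU tar, temp dirs only).
src=$(mktemp -d)
dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"                      # a and b now share one inode
(cd "$src" && tar -cf - .) | (cd "$dst" && tar -xpf -)
links=$(stat -c %h "$dst/a")              # expect 2: the hard link survived
echo "link count in copy: $links"
rm -rf "$src" "$dst"
```

A plain `cp -r` without `--preserve=links` would instead duplicate the file, which is exactly why hard-link awareness matters when cloning a root filesystem.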
David T-G wrote:
Hi, all --
It's long past time
diskfarm:~ # cat /etc/os-release NAME="openSUSE Leap" VERSION="15.2" ID="opensuse-leap" ID_LIKE="suse opensuse" VERSION_ID="15.2" PRETTY_NAME="openSUSE Leap 15.2" ANSI_COLOR="0;32" CPE_NAME="cpe:/o:opensuse:leap:15.2" BUG_REPORT_URL="https://bugs.opensuse.org" HOME_URL="https://www.opensuse.org/"
to upgrade diskfarm. It is also, coincidentally, time to add a mirror drive, which of course one can't do on the running system.
I think you can - with mdraid, you can create a faulty mirror with one drive, then add a mirror later. I feel certain I have done that in the past.

--
Per Jessen, Zürich (21.8°C)
On 8/9/22 14:59, Per Jessen wrote:
I think you can - with mdraid, you can create a faulty mirror with one drive, then add a mirror later. I feel certain I have done that in the past.
You can, you just specify a drive as "missing" on the command line when you create the array. Then migrate/install 15.4 to the array and "add" the 15.2 drive to the array later.

You will want to wipe the 15.2 drive (to remove any superblock or GPT partition entries) before adding that drive to the array. If I recall correctly, the wipe is needed with GPT because it writes entries to both the beginning and end of the drive, which can cause problems adding the drive to an array.

--
David C. Rankin, J.D., P.E.
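The degraded-mirror approach described above might look like the following sequence. It is only a sketch: /dev/md0, /dev/sdb1 (new drive's partition), and /dev/sda2 (old 15.2 partition) are placeholder names, wipefs is just one way to clear old signatures, and the commands are printed rather than executed since they are destructive:

```shell
# Sketch of "create with missing, migrate, then add the old drive"; printed only.
plan='mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mkfs.xfs /dev/md0                        # new filesystem on the degraded array
# ... install/migrate 15.4 onto /dev/md0 and boot from it ...
wipefs --all /dev/sda2                   # clear old signatures before reuse
mdadm --manage /dev/md0 --add /dev/sda2  # resync starts; mirror becomes whole'
echo "$plan"
```

The literal word "missing" is what tells mdadm to build the RAID1 with only one member present.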
On 2022-08-09 22:51, David C. Rankin wrote:
On 8/9/22 14:59, Per Jessen wrote:
I think you can - with mdraid, you can create a faulty mirror with one drive, then add a mirror later. I feel certain I have done that in the past.
You can, you just specify a drive as "missing" on the command line when you create the array. Then migrate/install 15.4 to the array and "add" the 15.2 drive to the array later.
You will want to wipe the 15.2 drive (to remove any superblock or GPT partition entries) before adding that drive to the array. If I recall correctly, the wipe is needed with GPT because it writes entries to both the beginning and end of the drive which can cause problems adding the drive to an array.
Rather than wiping it entirely, which is slow and, in the case of an SSD, causes wear, it is better to calculate how many sectors to delete at the start and at the end.

--
Cheers / Saludos,
Carlos E. R.
(from 15.3 x86_64 at Telcontar)
Carlos E. R. composed on 2022-08-09 22:54 (UTC+0200):
David C. Rankin wrote:
You will want to wipe the 15.2 drive (to remove any superblock or GPT partition entries) before adding that drive to the array. If I recall correctly, the wipe is needed with GPT because it writes entries to both the beginning and end of the drive which can cause problems adding the drive to an array.
Rather than wipe it entirely, which is slow, and in the case of SSD causes wear, better to calculate what to delete at the start and end. How many sectors.
Better partitioning tools know to wipe the end of the drive when wiping a GPT table. They perform the required calculations automatically and just do it. They may or may not announce that this does or will happen. With competent tools, creating a new, empty GPT table has the intended effect of such a wipe.

--
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata
On 8/9/22 15:51, David C. Rankin wrote:
You will want to wipe the 15.2 drive (to remove any superblock or GPT partition entries) before adding that drive to the array. If I recall correctly, the wipe is needed with GPT because it writes entries to both the beginning and end of the drive which can cause problems adding the drive to an array.
My bad for not being clear, but the intent was just to get rid of the superblock / GPT table entries. The word "wipe" (as in the utility "wipe") was an unfortunate choice of words.

To zap the GPT entries, gdisk has the "zap" option, or you can do it manually with dd, similar to:

start table entry:

# dd if=/dev/zero of=/dev/drive bs=512 count=34

end table entry:

# dd if=/dev/zero of=/dev/drive bs=512 count=34 seek=$(($(blockdev --getsz /dev/drive) - 34))

If repurposing a disk from an array, you can use mdadm to wipe the superblock:

# mdadm --misc --zero-superblock /dev/drive

--
David C. Rankin, J.D., P.E.
On 2022-08-10 07:20, David C. Rankin wrote:
On 8/9/22 15:51, David C. Rankin wrote:
You will want to wipe the 15.2 drive (to remove any superblock or GPT partition entries) before adding that drive to the array. If I recall correctly, the wipe is needed with GPT because it writes entries to both the beginning and end of the drive which can cause problems adding the drive to an array.
My bad for not being clear, but the intent was just to get rid of the superblock / GPT table entries. The word "wipe" (as in the utility "wipe") was an unfortunate choice of words.
To zap the GPT entries, gdisk has the "zap" option or you can do it manually with dd, similar to:
start table entry
# dd if=/dev/zero of=/dev/drive bs=512 count=34
end table entry
# dd if=/dev/zero of=/dev/drive bs=512 count=34 seek=$(($(blockdev --getsz /dev/drive) - 34))
Thanks :-)
If repurposing a disk from an array, you can use mdadm to wipe the superblock:
# mdadm --misc --zero-superblock /dev/drive
--
Cheers / Saludos,
Carlos E. R.
(from 15.3 x86_64 at Telcontar)
participants (8)
- Andrei Borzenkov
- Carlos E. R.
- David C. Rankin
- David T-G
- Felix Miata
- jdd@dodin.org
- Per Jessen
- Peter Suetterlin