[opensuse-support] NVMe drive setup on system w/o NVMe BIOS support
So I've got myself a bunch of NVMe adapters and drives and plopped them into my Haswell systems to make good use of the empty PEG slot. None of these systems have NVMe support in the BIOS, nor do they offer PCIe bifurcation, so it'll probably be a single NVMe running in PCIe x4 mode forever. If it's somehow possible to enable bifurcation without proper BIOS support, I'd like to know. On the one computer where I might want to put in a second NVMe (a Dell T20), I believe the PCIe bus goes directly to the slot without any switches that might need to be controlled.

The drives were recognized immediately as /dev/nvme0n1, and I've already installed nvme-cli and ran mkinitrd so the boot system will have it too (even though it'll probably not need it).

The first system used plain partitions and I wanted to convert to LVM, so I created a new VG on the naked device (see my question about that further down) and set up new volumes for swap, sysroot and home. Then I moved home with xfs_copy, re-created and switched the swap, and lastly ran btrfs replace to move the root filesystem from the partition to the LVM, then edited fstab and ran mkinitrd. It was only then that I realized that /boot was not in fact on the EFI system partition, but a subvolume of the sysroot btrfs. OK, the grub entries looked halfway sensible, so I decided to see what would happen, rebooted, and of course it didn't find the boot partition, because Grub doesn't see anything the BIOS doesn't see.
So I booted into a rescue system, moved the btrfs back to where it was before and then:

--8<---------------cut here---------------start------------->8---
mount /dev/sda3 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /var /mnt/var
chroot /mnt
mkinitrd
--8<---------------cut here---------------end--------------->8---

I don't really want or need to do that again, but it was less painful than I remember it from years before… but the question remains whether there's an easy way to splice out onto the SATA just the part needed to get the NVMe (and the LVM on it) recognized, while keeping the rest of sysroot on NVMe (I would like to replace the SATA with some older / smaller one later). As far as I understand the Clover mod, it can only boot NVMe disks with a GPT, and maybe only Windows, but I've yet to dig deeper. But I gather that there might be a way to provide EFI with some sort of driver or chain loader that sets up the NVMe recognition, and Grub could take over from there?

The other system (the T20) is already LVM based, which I want to leverage for the move from the SATA to the NVMe drive. I've looked around a bit and there are multiple ways of doing that; I'd like to know if one is clearly preferable over the other(s).

The first part concerns the creation of the new physical volume. All the howtos I've read so far talk about partitioning the disk and then using one of the partitions to create the volume. I think that isn't necessary and I could use the whole device (unpartitioned), since I can't boot from the disk and the partition would use the full space available anyway. My experience with the second system indicates that this should work OK, but maybe there is a disadvantage to doing that I don't know about?
The second part concerns the actual move of the logical volumes to the new drive once it's been added to the volume group. I want at least the swap and the home LV to reside there, but leave the root LV on the SATA SSD (at least as long as I haven't figured out on the other system how to separate out the boot system). The emptied space will get used for backups and less frequently used data, in a new LV.

The LV move could be effected by pvmove, but I've found several howtos for doing it by setting up a mirror spanning the two devices and then breaking the mirror, leaving only the new device in the LV. One was alluding to that being somehow "better" or even "safer" than using pvmove. What are the actual downsides of using pvmove? The NVMe is larger than the SATA, so I can make backup copies in the extra space (and on an external disk) before the "hot" move.

Regards,
Achim.
-- 
+<[Q+ Matrix-12 WAVE#46+305 Neuron microQkb Andromeda XTk Blofeld]>+
SD adaptation for Waldorf rackAttack V1.04R1:
http://Synth.Stromeko.net/Downloads.html#WaldorfSDada
19.11.2020 22:13, Achim Gratz writes:
> I don't really want or need to do that again, but it was less painful
> than I remember it from years before… but the question remains if
> there's an easy way to get the part that is needed to get NVMe and the
> LVM on it recognized spliced out onto the SATA (which I would like to
> replace with some older / smaller one later) but keep the rest of
> sysroot on NVMe. As far as I understand Clover mod it can only boot
> NVMe disks with a GPT and maybe only Windows, but I've yet to dig
> deeper. But I gather that there might be a way to provide EFI with
> some sort of driver or chain loader that sets up the NVMe recognition
> and Grub could take over from there?
If the ESP is on a disk recognized by your firmware, you can always load drivers from the ESP that enable access to other devices. An NVMe driver is part of edk2 (which Clover is based on); the Clover build also contains it, or you could compile edk2 directly. That said, at least one user describes that loading the NVMe driver as part of boot hung the system, and he had to resort to using startup.nsh and a two-step procedure (first load, then connect the driver). https://rustedowl.livejournal.com/58627.html - the page is in Russian, but the commands are in ASCII :)
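For reference, the two-step workaround would look roughly like the sketch below in a startup.nsh on the ESP. The file paths and the GRUB image name are assumptions for illustration; NvmExpressDxe.efi is the NVMe driver from edk2, and fs0: is whatever mapping the EFI shell assigns to the ESP (check with the `map` command first):

--8<---------------cut here---------------start------------->8---
# startup.nsh - run automatically by the EFI shell at startup
# Step 1: load the NVMe driver (without connecting it yet)
load fs0:\EFI\drivers\NvmExpressDxe.efi
# Step 2: (re)connect all drivers to all controllers recursively,
# which makes the NVMe disk and its block devices appear
connect -r
# Then chain into GRUB on the ESP (path is an assumption)
fs0:\EFI\opensuse\grubx64.efi
--8<---------------cut here---------------end--------------->8---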
> The first part concerns the creation of the new physical volume. All
> the howtos I've read so far talk about partitioning the disk and then
> using one of the partitions to create the volume. I think that isn't
> necessary and I could use the whole device (unpartitioned) when I
> can't boot from the disk and the partition would use the full space
> available anyway. My experience with the second system indicate that
> this should work OK, but maybe there is a disadvantage of doing that I
> don't know about?
As long as you do not dual boot, using the whole device is probably OK. Using a partition marks the disk as "in use"; otherwise it is possible to accidentally overwrite this disk from some other OS that does not understand the LVM signature.
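The whole-device variant is then just the following two commands; device and VG names are examples only, so verify with lsblk/vgs before running anything:

```shell
# Sketch only - device and VG names are assumptions; these commands
# are destructive, double-check the target device with lsblk first.
pvcreate /dev/nvme0n1          # PV directly on the whole, unpartitioned device
vgextend system /dev/nvme0n1   # add the new PV to the existing VG "system"
pvs -o pv_name,vg_name,pv_size # verify the PV is now part of the VG
```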
> The second part concerns the actual move of the logical volumes to the
> new drive once it's been added to the volume group, I want at least
> the swap and the home LV to reside there, but leave the root LV on the
> SATA SSD (as long as I haven't figured out on the other system how to
> get the boot system separated out). The emptied space will get used
> for backups and less frequently used data, in a new LV. The LV move
> could be effected by pvmove, but I've found several howtos for doing
> it via setting up a mirror spanning the two devices and then breaking
> the mirror and leaving only the new device in the LV. One was alluding
> to that being somehow "better" or even "safer" than using pvmove. What
> are the actual downsides of using pvmove? The NVMe is larger than the
> SATA, so I can make backup copies in the extra space (and on external
> disk) before the "hot" move.
I guess the rationale behind that is that mirroring leaves you with a full intact copy for the whole duration, so if anything happens (or you cancel the operation) you still have a usable source volume. Note that in both cases both PVs must belong to the same VG. Which means, if /boot is located on LVM, grub may need access to both devices (I am not sure whether it will activate a partial VG). This returns us to the need for an NVMe UEFI driver. Or you need to make SATA and NVMe part of different VGs (i.e. split the VG after moving the data).
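Both approaches can be sketched as below; the VG name "system", LV name "home" and PV device names are assumptions for illustration:

```shell
# Approach 1: pvmove - relocates extents in place; it is resumable if
# interrupted, but there is no point during the move at which two
# complete copies of the data exist.
pvmove -n home /dev/sda2 /dev/nvme0n1

# Approach 2: temporary mirror - a full second copy exists until you
# deliberately drop the old leg.
lvconvert -m1 system/home /dev/nvme0n1   # add a mirror leg on the NVMe PV
lvs -a -o lv_name,copy_percent           # wait until sync reaches 100%
lvconvert -m0 system/home /dev/sda2      # drop the leg on the SATA PV

# Afterwards the VG could be split so that grub only needs one device;
# note vgsplit requires the affected LVs to be inactive:
# vgsplit system system_nvme /dev/nvme0n1
```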
participants (2)
- Achim Gratz
- Andrei Borzenkov