On 02/20/2015 12:07 PM, h15234@mailas.com wrote:
> I'm working on an openSUSE 13.2 machine running systemd v210.
Which version of Grub? Of LVM?
> Its disks are all on RAID.
> /boot is on RAID1 on /dev/md126
That seems strange. It doesn't look like an LVM mapper address. I'm a great supporter of LVM, but I never put /boot on LVM. I know I can, but it's too much hassle when things go wrong.
> The remaining partitions are on LVM-on-RAID10
> The LVs are
>   LV_ROOT  VG0  -wi-ao---   20.00g
>   LV_SWAP  VG0  -wi-ao---    8.00g
>   LV_HOME  VG0  -wi-ao---  100.00g
>   LV_VAR   VG0  -wi-ao---    1.00g
> The system fails to boot, dropping to a maintenance mode prompt.
> Simply hitting Ctrl-D to continue finishes booting the system.
Is this an "every time" occurrence or intermittent? Does it only occur on 'cold boots', when the disks are being spun up, or also on reboots, when the disks are already spinning?

IIRC, with LVM there is some unit that introduces a delay so that the LVM devices have time to appear. It sounds like either this simply isn't being done or it isn't working. Your 'manual' start works because human-level 'reflexes' have given the disk subsystem time to come up and be operational.

One "solution" is to edit the GRUB boot command line and add bootdelay=10, and possibly lvmwait=/dev/VG0/LV_HOME or whatever is appropriate. Ten seconds may seem too much, but see if it works and then reduce it step by step. If this turns out to be useful, you can add the parameters permanently by editing /etc/default/grub.

If this is an 'every time' problem, I'd first make sure the device-mapper module is in the initrd of your boot kernel. The device mapper is a low-level volume manager; higher-level volume managers such as LVM2 use this driver. If the device mapper isn't part of the boot kernel, it has to be loaded as a module from the root FS, which isn't available yet.

*IF* this is not a disk drive timing problem, then it's a problem with LVM activation. Somewhere along the line, the equivalent of the command

  # lvm vgchange -ay

isn't being done, isn't being done early enough, or there isn't the 'wait' around it. That part *may* be systemd unit related. Not systemd itself, but the unit that specifies startup of the disk subsystem. Since I don't run LVM on RAID, I can't tell you where to go for that. Worse, it may not be a static file but one produced by a unit generator.

It may also be related to a udev rule, since the rules there pertain to the startup of both LVM and RAID. Once again, I don't run RAID, so I can't be specific. However, I'd also look at /etc/sysconfig/dmraid for timeout values.

-- 
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
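For what it's worth, the steps above can be sketched roughly as follows. This is only a sketch: it assumes the parameter names mentioned in the thread (bootdelay, lvmwait) and the VG0/LV names from the original post, and the sed edit is demonstrated on a scratch copy rather than the live /etc/default/grub:

```shell
# 1. One-off test: at the GRUB menu press 'e', append the parameters to
#    the line starting with "linux", then boot with Ctrl-X:
#      bootdelay=10 lvmwait=/dev/VG0/LV_ROOT

# 2. If that helps, make it permanent by editing /etc/default/grub.
#    Demonstrated here on a scratch copy so nothing real is touched:
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.default
sed -i 's|^GRUB_CMDLINE_LINUX_DEFAULT="|&bootdelay=10 lvmwait=/dev/VG0/LV_ROOT |' /tmp/grub.default
cat /tmp/grub.default
# GRUB_CMDLINE_LINUX_DEFAULT="bootdelay=10 lvmwait=/dev/VG0/LV_ROOT quiet splash"

# 3. On the real file, regenerate the GRUB config afterwards:
#      grub2-mkconfig -o /boot/grub2/grub.cfg

# 4. From the maintenance shell, the activation that should happen at
#    boot can be done by hand:
#      lvm vgchange -ay
```

Reduce the delay step by step once you know it works, as described above.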
-- 
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org