Hi,

I could use some help in nursing one of my machines back to life. This is openSUSE 13.1 on x86_64.

The machine also functions as my print server, and this is where the spiral started. This morning I wanted to print something from another machine and noticed that the printer had disappeared. After reconfiguring the printer and restarting cups on the print server didn't help, I hoped for the magical "let's reboot the box" trick. After all, what could possibly go wrong?

Well, pretty much everything went wrong from there. Issuing "shutdown -r now" put the box in limbo and I ended up having to push the power button. After turning it back on, it initially wouldn't boot at all, getting stuck while trying to deal with the attached external USB drive. The external device is not part of any VG setup. I turned the external device off and commented out the entries in /etc/fstab that point to the volume groups. Keeping the fstab entries brought me to systemd's emergency boot mode, but the keyboard wouldn't work there, which made it entirely useless (filed a bug).

I can now get the system to boot, but everything with the LVM setup is messed up. For example, lsblk only shows the bare drives:

# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0      2:0    1     4K  0 disk
sda      8:0    0  55.9G  0 disk
├─sda1   8:1    0     1G  0 part /boot
├─sda2   8:2    0    12G  0 part [SWAP]
└─sda3   8:3    0  42.9G  0 part /
sdb      8:16   0 931.5G  0 disk
└─sdb1   8:17   0 931.5G  0 part
sdc      8:32   0 465.8G  0 disk
└─sdc1   8:33   0 465.8G  0 part
sdd      8:48   0 931.5G  0 disk
└─sdd1   8:49   0 931.5G  0 part
sr0     11:0    1  1024M  0 rom

But when I boot into a rescue system from the SLES DVD and run lsblk there, I can see the LVM setup in the output, for example:

sdb
└─sdb1
  └─extend_VG-home (dm-1)

(Typing out the whole tree is a bit cumbersome.)

Running lvdisplay on the installed system produces sensible output, i.e. what I would expect, except for:

# lvdisplay
  No device found for PV mM4Hhg-o52o-pj0p-fd7v-ArzT-gilf-ZrVdTh.

The rest of the output looks OK to me.
I have no idea how I would map the cryptic name to an actual device.

The system log contains timeout messages for 2 of the 3 drives:

systemd-udevd[310]: worker [325] /devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:1/3:0:1:0/block/sdd/sdd1 timeout; kill it
systemd-udevd[310]: worker [327] /devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdc/sdc1 timeout; kill it

In the rescue system I can set up a chroot environment and then mount the volumes, and everything is fine. As usual, I would rather not go the reinstall route, but I am out of ideas at this point. I could use some help.

I ran zypper up right before I tried to shut down the machine, so all the latest updates are installed.

Thanks,
Robert

--
Robert Schweikert
MAY THE SOURCE BE WITH YOU
SUSE-IBM Software Integration Center
LINUX Tech Lead
Public Cloud Architect
rjschwei@suse.com
rschweik@ca.ibm.com
781-464-8147
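
P.S.: Regarding mapping the cryptic name: my understanding (not yet tested on this box) is that the string in the lvdisplay warning is the PV UUID, so comparing it against the UUIDs reported for each candidate partition should identify which device LVM thinks is missing, roughly:

# pvs -o pv_name,pv_uuid
# blkid /dev/sdb1 /dev/sdc1 /dev/sdd1

If one of the partitions shows up as an LVM2 member with that UUID but LVM still doesn't see it, I would guess the next step is a pvscan followed by vgchange -ay to try reactivating the volume groups, but corrections welcome.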