Greg Freemyer wrote:
> On Thu, Nov 15, 2012 at 12:21 PM, Linda Walsh <suse@tlinx.org> wrote:
>
> however, if your addition of disks has changed the "sd" ordering, it
> won't boot from the correct disk and your initrd won't be loaded either.
>
> This is slightly off-topic: Linda, I admit that is a struggle because I
> reconfigure computer drives all the time. Each BIOS seems to have its own
> way to break the boot process as drives, thumb drives, and USB DVD drives
> are added and removed. I bought a nice Intel MB a couple of weeks ago. It
> was the first one I have seen that had a "static" option.
It's mostly Windows that is guilty of re-ordering drives, in my experience. I can add/remove drives -- the main floppy, USB drives, SATA drives -- and it doesn't affect the Linux device names that I boot from on a Dell system. But it's *not* automatic: if I add new HW, I go into the BIOS and make sure everything is still in the boot order I want. The only time the order changes is if a drive before my boot drive is taken out; then the BIOS moves the remaining drives "down".

My 'a'+'b' drives are SAS-based RAID arrays, which I tell the BIOS to mount first, but I boot off of drive C, the same as when I had 2 floppies there. I can add/remove CDs -- since they aren't disks, they don't change anything. If I add a SATA disk, it still doesn't matter, as I tell my BIOS to boot off-board cards (meaning in a slot, vs. soldered) first, and I tell it the order (the card in slot 0 vs. slot 5). Barring me taking drives out, nothing moves (there are 2 RAID-based drives attached to the card in slot 0).

Another reason I have "smallish" system drives: I use 15K SAS drives, which are smaller. To get the space I have and optimize speed, I 'short-stroke' the array, using only the first half of 3 72GB drives, with 1 being parity. Effectively I have only 72GB for all of '/', '/usr', '/var', '/var/cache', swap, and boot.

My larger storage disks hold everything else -- all 2TB 7200rpm SATAs hooked up to a RAID SAS controller, split between "sda" and "sdb". sdb, for backups, is a RAID6 with 6 data disks. sda is my main work disk, which is split into volumes using LVM -- seek times aren't as fast as on the root HD, but linear speeds are good; it's configured as a RAID50 using 3 groups of 5. The 1 disk left over is a global spare. I don't even have room for /usr/share on the root drives anymore -- it grew too big -- so my /usr/share lives on /home/share, which is then mounted via bind to /usr/share.
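For anyone wanting the same layout, that bind arrangement is expressible in /etc/fstab; a minimal sketch (the LVM volume name below is illustrative, not the actual device):

```
# /etc/fstab sketch -- device name is illustrative
# /home must be mounted before the bind entry below it
/dev/vg0/home   /home        xfs    defaults   0 2
/home/share     /usr/share   none   bind       0 0
```

Note the ordering in the file matters here: the bind source (/home/share) only exists once /home is mounted.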
One thing that doesn't get advertised much, is that /usr/share -- specifically meant to be non-arch-specific, shared-content, is ALSO being required now in order to boot. So it's not just /usr, but /usr/share as well (and likely others will be added over time). But as an example, my /usr/share is in /home/share, that means /home needs to be mounted as well -- all of this worked fine under 12.1. Even with /usr mounted, the newest stuff in factory won't necessarily boot -- as /usr/share wasn't mounted. Now maybe that's because /home (on an lvm based device) wasn't mounted, or maybe they didn't also mount / process the 'binds' and 'rbinds', or maybe they didn't do it in the right order: 1. / (root) 2. /usr 3. /home 4. /usr/share (from /home/share). I don't know where it failed, as I had to mount /usr/ manually and call boot & init scripts after that. Note -- my boot failed because of a new, added, "screw-you" check (one that catches a compliance error but serves no useful purpose). Someone put in a superfluous and ill-considered check to make sure your root disk is read-only upon booting, because they "know"[sic], that you can't run a file system's "fsck" script on a writable disk. But that's just stupid -- for 2 reasons: 1) if fsck can't run due to a disk being writable, then it will die with an error saying it can't do the check -- so the first check is redundant. 2) if fsck CAN run with a disk being writable, then the extra check just screws you over -- for no technical reason. Again, this is an example of changes going in with no thought behind them (or, worse, with deliberate malicious intent). I'm presuming that they simply didn't use their brain and didn't think the logic through -- which given the choices -- is actually a positive assessment. I also had the mount for /dev/hugetbls fail -- because it's mount point had not been created on /dev. Why? because udev exited early -- because /usr/share/somthing wasn't available to process some other device! 
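The two cases can be sketched in shell. This is a toy stand-in, not the real boot script -- `fsck_stub` and the `CAN_CHECK_RW` knob are hypothetical illustrations of an fsck that may or may not tolerate a writable disk:

```shell
#!/bin/sh
# Toy illustration of the 2 cases above. fsck_stub stands in for a real
# fsck that refuses, ON ITS OWN, to check a fs it considers unsafe.
fsck_stub() {   # $1 = "ro" or "rw" (state of the disk being checked)
    if [ "$1" = "rw" ] && [ "$CAN_CHECK_RW" != "yes" ]; then
        echo "fsck: cannot check writable filesystem"
        return 8    # 8 = fsck's conventional "operational error" code
    fi
    echo "fsck: clean"
}

# Case 1: this fsck can't handle a writable disk -- it already fails by
# itself, so a separate read-only pre-check adds nothing.
CAN_CHECK_RW=no  fsck_stub rw

# Case 2: this fsck *can* handle a writable disk -- a read-only pre-check
# would have aborted the boot for no technical reason.
CAN_CHECK_RW=yes fsck_stub rw
```

Either way, the pre-check changes nothing for the better: the failure case already fails, and the success case gets blocked.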
As near as I can tell, some seriously screwed-up circular dependencies are in the making that will make bringing a system up from any sort of 'problem state' near impossible -- that is the "benefit" of moving to a separate /usr and forcing initrd. In my case, udev needed to create devices, but exited/died when it couldn't access something in /usr/share -- which wasn't there because /dev wasn't populated to allow the mount of /home/share. It's bad enough when one can't mount /usr because mount is on /usr, but if you can't get udev running, and that forces /usr/share to not be mountable (which udev needs), openSUSE has created a maintenance nightmare. The idea of spreading dependencies out across all of the disks is going to lead to more circular-logic problems (just like, to mount /usr, one needs 'mount' -- but some rocket scientist put that in /usr/bin).

Of course, the idea of mounting is 'moot' anyway, since a reason given for needing initrd was to check disks (fsck) if they needed it... he pointed to xfs_repair (usually only run after xfs_check, in my experience) -- neither of which is even on the initrd, let alone called as part of any boot process. So if /usr doesn't mount, you can't check it anyway -- the idea that you need initrd to pre-check the disks is also moot.

Someone else suggested that you need initrd to decrypt disks -- but if that were true, how could you read initrd or the boot image if they were encrypted? Isn't it true that those routines need to be already built into your kernel if you want to use them? If you want an encrypted root disk, then you can pre-boot from a ram disk (initrd), but decrypting it requires the decryption support be built into the kernel -- and if you do that, you also need ram-disk and loopback support built in. I would be surprised if the Linux kernel didn't have a way of booting with an encrypted root w/o an initrd.

Do you see how many problems just mounting /usr in the initrd can cause?
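The cycle can be made concrete with a toy dependency walk -- the graph below is an illustration of the failure described, not openSUSE's actual boot ordering:

```shell
#!/bin/sh
# Toy sketch of the circular dependency described above.
# needs() prints what each boot step depends on (illustrative graph only).
needs() {
    case "$1" in
        udev)        echo "/usr/share" ;;  # udev helpers live under /usr/share
        /usr/share)  echo "/home" ;;       # bind-mounted from /home/share
        /home)       echo "udev" ;;        # LVM device nodes come from udev
    esac
}

# Walk the chain starting from udev and detect when it loops back.
seen="udev"
cur="udev"
while :; do
    cur=$(needs "$cur")
    case " $seen " in
        *" $cur "*) echo "circular dependency: $seen -> $cur"; break ;;
    esac
    seen="$seen $cur"
done
```

Running it prints the loop: udev needs /usr/share, which needs /home, which needs udev again -- exactly the "can't get udev running" trap.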
It's known to be unreliable (I've had failures with unbootable systems in the past, and Factory required a rescue disk to boot all of last week to get around initrd bugs). It's NOT just 1 bad thing -- it's several at this point, and the number will continue to climb as people insist on going down this poorly-thought-out path. Let Red Hat commit hara-kiri alone -- it will only help those who don't follow the head lemming. If they finish their work and still have customers, and it shows any benefit -- maybe THEN consider moving in that direction. But as it stands now, you can get the same benefits without kneecapping yourself.
--
To unsubscribe, e-mail: opensuse-factory+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse-factory+owner@opensuse.org