More anecdotal 11.0 stuff to add to the pile. 11.0 vs 10.3, rather. I'm installing a new batch of servers, which means a lot of install & re-install, and some patterns have developed that I couldn't have stated for sure before.

I'm installing on several copies of the same hardware config: Supermicro X7DBN motherboard (dual xeon, one quad core & 4G ECC ram installed), 2 on-board intel gigE nics, on-board sata fakeraid/host-raid with both intel and adaptec code (selectable in bios), and an adaptec 3405 pcie card providing hardware raid to 12 sata drives. The backplane acts as an expander: there is only one cable, with one of those 4xSATA concentrator plugs on both ends, running from the 3405 to the backplane, and that's the only hard drive cable or connection anywhere in the box (the backplane has no other connection directly to the motherboard, for example).

Drives are 12x 500G sata2. Using raid10 that comes out to 2.7 TB, which is too big for a single partition in an msdos disklabel, which is all opensuse's installer can create. So rather than merely chopping it up into partitions, I'm chopping it up in the raid card into multiple arrays, for multiple logical disks: 1x 500G raid10 across all drives, and 2x 1TB raid10 across all drives. Eventually I hope to use these as one xen host (500G) and 2 xen guests (1TB each) per box, but for now I'm merely installing onto the 500G array and leaving the rest untouched. So /dev/sda is a simple 500G drive to the os, which I am formatting as 1G /boot, 8G swap, rest /.

The raid card uses the aacraid driver, which is present in both the 10.3 and 11.0 stock kernels. The nics use the e1000 driver, also present in both stock kernels.

No cd or floppy: these are funky 12-hot-swap-drive cases in only 2u, which means the front is 100% hard drive bays with not one square cm left over. The power button, reset pin, and power/nics/hd/temp/alarm lights are all squished into a tiny little membrane on one of the rack ears! Point is, install is via usb or pxe.
In my case, pxe every time: pxe for booting and http for the install source. The http install source is an opensuse 10.2 x86_64 box which is also the dhcp server, tftp server, and nat router to a cable modem. Here's where the reproducible patterns come in, re: 10.3 vs 11.0.

--- Installing 10.3 x86_64 via pxe: ---

* The nic card's pxe client always succeeds at getting dhcp, but the 10.3 installer always fails dhcp. I always have to manually enter ip settings, but given manual ip settings, the net connection always works fine the rest of the way. This may be some fluke interaction between this particular dhcp client and my Foundry EdgeIron 24G switches, because I've seen it before here at the office; yet I've certainly seen the 10.3 installer's dhcp client work fine, not only elsewhere but here too, when plugged into some other switch taking some other path to the same dhcp server, and other dhcp clients work fine here on the same switch(es).

* Install goes perfectly normally, and the machine installs, reboots, and works perfectly.

--- Installing 11.0 x86_64 via pxe: ---

* The nic card's pxe client always succeeds at getting dhcp, and the 11.0 installer's dhcp client also always succeeds! Same client machine, same switch, etc. Also other copies of the same client hardware (so it's not some odd effect of dhcpd remembering the mac, previously loaded arp tables in the switches or dhcp server, etc.)

* Install goes normally, but for the life of me I can't get it to boot after install. Even choosing all the same settings, outwardly anyway, the bootloader installer is doing things a little differently on its own wrt using disk id vs device name in menu.lst. Whether I keep as hands-off as possible and just let it do its thing, or intervene and force the same generic-mbr & other settings that 10.3 created on its own, all that happens at reboot is: stage 1.5 loads, then one more line after that, "GRUB loading please wait...", and it hangs forever at that point. No drive activity.
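For anyone following along, the pxe side of a setup like this boils down to a couple of small config fragments. These are illustrative only; the paths, IP address, and directory names below are guesses at a typical layout, not copied from my actual box:

```
# /srv/tftpboot/pxelinux.cfg/default
default install
prompt 1
timeout 100

label install
  kernel suse-11.0/linux
  append initrd=suse-11.0/initrd install=http://192.168.0.1/SUSE/11.0/oss
```

and in dhcpd.conf on the same server:

```
next-server 192.168.0.1;
filename "pxelinux.0";
```

The `install=` parameter is what hands the http source to linuxrc once the kernel and initrd come over tftp.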
Locked, unresponsive box, not even ctrl-alt-del. Must power cycle.

I even tried installing 10.3, updating, and verifying that I had a perfectly working 10.3 minimal text install that runs, and specifically that it boots fine, all updates applied, etc. Then I upgraded from that perfectly working state to 11.0 by doing:

  # zypper rr updates
  # zypper rr non-oss
  # zypper rr oss
  # zypper ar http://host/SUSE/11.0/oss oss
  # zypper ar http://host/SUSE/11.0/non-oss non-oss
  # zypper ar http://host/SUSE/11.0/updates updates
  # zypper ref
  # zypper in rpm
  # zypper in zypper
  # zypper dup

* Gotta love zypper! Sweet! The upgrade from 10.3 to 11.0 goes perfectly cleanly, and all while still running live on 10.3!! You don't need to reboot into an installer; just reboot whenever you feel like, after it's all over, to get onto the new kernel. Wow! Holy crap that's cool. I think I want to make up some t-shirts that say "zypper is hipper!" or something to that effect. ;)

* I couldn't get xen working either. I've never used it before, so that's not surprising, but still, I'm just trying to use a yast menu choice like a dummy, as a reference starting point before digging in and getting fancy. Using yast, I can install xen itself, and successfully boot into the xen kernel and run the same installation, now as a xen dom0. But going into yast again and selecting "create vm", it just hangs. Not the whole box, just yast. Given that I had to answer "yes, install gui tools anyway" while installing xen because I am running in text mode, possibly one of the components that yast is trying to run not only requires, but also simply assumes, that I have an X display? This was just in 11.0 and xen 3.x after all online updates; I haven't tried it in 10.3 yet.

* 11.0 x86_64 installed without incident on several other machines of a different type.
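For reference, the disk-id vs device-name difference I'm talking about in menu.lst looks roughly like this. Both entries below are mock-ups, not pasted from the machine, and the by-id string in particular is made up; 10.3 tended to write plain device names while 11.0 writes /dev/disk/by-id paths:

```
# 10.3-style: root by device name (kernel paths are relative to /boot)
title openSUSE 10.3
    root (hd0,0)
    kernel /vmlinuz root=/dev/sda3 resume=/dev/sda2 showopts
    initrd /initrd

# 11.0-style: root by disk id (this id string is a fabricated example)
title openSUSE 11.0
    root (hd0,0)
    kernel /vmlinuz root=/dev/disk/by-id/scsi-3600508e000000000-part3 showopts
    initrd /initrd
```

To be clear, the hang happens in grub stage 1.5, before menu.lst is even read, so this difference may be a red herring; it's just the only visible thing the two installers do differently with the bootloader.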
Those have a different mobo and only 8 sata drives on an adaptec 3805, using the direct-attach kind of sata cable, where the card end has 2 of the 4-sata concentrator plugs and the drive end is individual sata cables & plugs, two of those per card, for 8 drives. Point being, the backplane there is really just a glorified hotswap cage connector, not really a backplane per se, and definitely not an expander like in the other boxes. Raid is smaller with only 8x 320G drives, configured as one 1.6TB raid10 array, partitioned as 1G /boot, 8G swap, rest /. Install -> boot -> run -> updates -> boot -> run. No problems anywhere along the way.

* Installing 11.0 via serial console results in the display setting itself to only 24 lines, even if 25 or more are really available. With 25 lines, all the text mode screens in yast work. With 24 lines, they _look_ like they are working fine, but really you are missing tabs (well, they would be tabs in the gui installer; they are just [buttons] in the text/dialog interface) across the top in some screens. Yast (or perhaps dialog?) must be intelligently adjusting itself such that you don't see the bottom edge of the tabs, and unless you already knew better from seeing the same screen on a real console or ssh, you'd never know there were whole screens that you can't get to by any means. In some cases there are alternate means to navigate via the new F-keys, but on a serial console it's sometimes difficult to get perfect linux terminal emulation such that F9, F10, etc. all work, because you may need the serial console configured to cater to the motherboard bios's idea of "pc-ansi" or "vt100", and you may NEED that emulation to be perfect, including all F-keys.
Assuming that for whatever reason you can't use the ssh option until later, after install, it's still possible to complete an install and get back into the installed os, still via serial. Unlike in the installer, you then have a chance to run "export LINES=25; readonly LINES" before running yast, and then yast works fully for the duration of that session, wherein you can get at those remaining screens. Also, in that case you have a chance to arrange better terminal emulation, because unlike the installer, you now have the full termcap and terminfo libraries installed, so you can choose a better matching definition, and/or create/install a better matching definition and then use it.

* The boot-installed-system option from a 10.3 installer works fine to boot one of these non-booting but not-otherwise-broken 11.0 installs, at least as long as the kernel has the minimum required hd/fs/nic drivers in the initrd or built in, even with NO /lib/modules/* matching the running kernel.
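Spelled out, the workaround above is just this (yast itself is left commented so the snippet stands alone off the box):

```shell
# Over serial, yast's text UI silently hides the top row of tab [buttons]
# at 24 lines. Force the session to 25 lines and pin it so nothing
# (including yast's own terminal probing) can shrink it back down:
export LINES=25
readonly LINES
echo "LINES pinned at: $LINES"
# yast   # <- then run yast; the 25-line layout holds for this session
```

The readonly matters: without it, anything that re-queries the terminal size can stomp the value back to 24 mid-session.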
From reading the linuxrc docs, it looks like the kexec option might even allow you to actually boot the target installation's kernel instead of merely mounting its fs, but I haven't tried that yet.
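If the kexec route pans out, I'd expect the manual equivalent to look something like the following, from a rescue shell with the installed system mounted under /mnt. The kernel/initrd filenames and the command line are guesses for illustration, and this needs root on a real box, so treat it as a sketch, not a recipe:

```
# Load the installed system's kernel and initrd into the running kernel:
kexec -l /mnt/boot/vmlinuz \
      --initrd=/mnt/boot/initrd \
      --command-line="root=/dev/sda3 console=ttyS0,115200"

# Then jump straight into it, bypassing the (hanging) grub entirely:
kexec -e
```

That would get a broken-bootloader box up on its own kernel and modules, which the boot-installed-system trick above can't do.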
--
Brian K. White    brian@aljex.com    http://www.myspace.com/KEYofR
+++++[>+++[>+++++>+++++++<<-]<-]>>+.>.+++++.+++++++.-.[>+<---]>++.
filePro  BBx    Linux  SCO  FreeBSD    #callahans  Satriani  Filk!