[opensuse] Re: how do I boot to a text console in 12.2?
Felix Miata wrote:
2-Kernel/initrd changes require "running" lilo before a reboot can use them.
This is because a lilo-booted kernel doesn't need to know the FS format before booting --- lilo reads the boot code from the disk sectors directly. This is the most efficient type of boot, though it can have problems. I always run lilo as part of my install-new-kernel script, so it was never an issue.

Grub needs to know how to read a file system -- and how to read those file sectors off disk. It screwed that up so badly that Suse stopped supporting XFS, because grub doing direct disk read/writes on a live file system wasn't supported. I was told that was fixed, but XFS support at SuSE never recovered to where it was.

Now SuSE doesn't support TEXT MODE booting, because it's not in GRUB? HEY LYNN, it still works in LILO! I remember that breaking in Grub when I first tried switching to grub, because grub uses a graphical boot instead of BIOS VGA chip modes...
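For anyone following along, the post-install step being described looks roughly like this (a sketch only; the kernel version, initrd name, and paths are illustrative, and lilo must be run as root on a real system):

```sh
# Sketch of the tail of an install-new-kernel script (illustrative paths).
# Once new kernel/initrd files land in /boot, the sector map lilo wrote
# earlier is stale, so lilo must be rerun before the next reboot.
cp arch/x86/boot/bzImage /boot/vmlinuz-3.4.6
mkinitrd -k vmlinuz-3.4.6 -i initrd-3.4.6
lilo -v    # re-reads /etc/lilo.conf and rewrites the boot sector map
```

Grub, by contrast, reads the file system at boot time, so it finds a replaced kernel without any such re-mapping step.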
3-Grub shell allows to manually boot from the Grub shell, crucially handy if menu.lst is missing or otherwise problematic.
Can you use grub to boot directly from the hard disk, or do you need a ram disk to pre-boot from in order to really boot from your hard disk?
4-Interactive editing of initrd & kernel cmdline. 5-Chainloads NTLDR, IBM BM (at least in theory), other Grubs, & even Lilo.
--- lilo allows both of those
6-Easily configurable & installable even though booted to another OS or media. 7-Needs no files in /etc to be able to function or configure.
--- You mean the "System Config dir"?... why would having config dirs in the system config dir be a bad thing? And so far, I've never had problems with video... well... sorta scratch that... when I boot from memory (kexec), but that doesn't count -- it's not using lilo then...

Hey, I would love to use grub if it worked... but last time I tried it, it turned off the video during boot and I couldn't see the bootup... That's probably what the OP wants too!... grub isn't giving her one of the built-in VGA or EVGA console modes...

-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On Tue, 07 Aug 2012 17:41:15 -0700, Linda Walsh <suse@tlinx.org> wrote:
It screwed that up so badly, that Suse stopped supporting XFS because grub doing direct disk read/writes on a live file system wasn't supported.
When should that have been? And if that ever happened, it must have been looong ago because AFAIR we always supported XFS.
Now SuSE doesn't support TEXT MODE booting, because it's not in GRUB?
That's plain BS. Grub has nothing to do with the boot mode.
Hey, I would love to use grub if it worked...but last time I tried it -- it turned off the video during boot and I couldn't see the bootup....
Again, *which* version of grub? Grub 2 needs a different approach to have a graphical boot screen, but that has *nothing* to do with the mode the kernel boots in.

Philipp
On 2012/08/07 17:41 (GMT-0700) Linda Walsh composed:
Felix Miata wrote:
Now SuSE doesn't support TEXT MODE booting, because it's not in GRUB?
...
I remember that breaking in Grub when I first tried switching to grub because grub uses a graphical boot instead of BIOS VGA chip modes...
Is this referring to GFXboot? You don't have to use it if you don't like it. Just uninstall it. And you can ESC from it any time if plain text is what you want. If you're referring to what happens after making your Grub menu choice, it's all about what's on cmdline. In openSUSE I keep splashy and bootsplash uninstalled and locked out, plus 3, splash=verbose and vga=### and/or video=####x### on most of my cmdlines, to ensure against accidental appearances of anything except text until after init has completed.
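Put together, a menu stanza along those lines might look like this (a hypothetical Grub Legacy menu.lst entry; the device names, paths, and mode numbers are illustrative and vary per system):

```sh
# menu.lst stanza forcing a plain text boot to runlevel 3 (illustrative)
title openSUSE 12.2 (text mode)
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/sda1 vga=normal splash=verbose 3
    initrd /boot/initrd
```

Here `vga=normal` requests the plain 80x25 BIOS text mode, the trailing `3` is the runlevel, and the absence of `splash=silent` keeps boot messages visible.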
3-Grub shell allows to manually boot from the Grub shell, crucially handy if menu.lst is missing or otherwise problematic.
Can you use grub to boot directly from the hard disk, or do you need a ram disk to pre-boot from in order to really boot from your hard disk?
AFAIK I have no RAM disks involved with normal booting of any of my systems, unless an installation script is sneaking a temporary one into an initrd.
4-Interactive editing of initrd & kernel cmdline. 5-Chainloads NTLDR, IBM BM (at least in theory), other Grubs, & even Lilo.
lilo allows both of those
I don't remember ever typing help at a Lilo prompt and getting anything resembling a helpful response. Also I don't think Lilo allows the more elementary rendition of #4. Grub Legacy needs no menu. Once "installed" somewhere the MBR code can find, one can boot directly to a Grub prompt, from which one can locate stage1s, device.maps, kernels and initrds among other things, then boot by typing Grub commands in the same manner one uses common command shells like bash.
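The sort of prompt session being described goes roughly like this (a sketch of Grub Legacy shell usage; the partition and file names are illustrative):

```sh
# At a bare Grub Legacy prompt -- no menu.lst required:
grub> find /boot/grub/stage1          # locate partitions with grub files
grub> root (hd0,0)                    # the partition holding /boot
grub> kernel /boot/vmlinuz root=/dev/sda1 3
grub> initrd /boot/initrd
grub> boot                            # hand off to the kernel
```

Tab completion works on device and file names at this prompt, which is much of what makes it feel like a command shell.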
6-Easily configurable & installable even though booted to another OS or media. 7-Needs no files in /etc to be able to function or configure.
You mean the "System Config dir?"... why would having config dirs in the system config dir be a bad thing?
Only an overly complex bootloader like Grub2 needs to scatter the files required to make it work, or get it configured to work, across multiple directory trees. I can copy a tiny bundle of Grub Legacy files into one grub directory, "install" it to a boot sector by typing as little as two short lines, then initiate an operating system boot without doing anything more before seeing a Grub prompt or menu.
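The two short lines referred to are presumably something like this (a sketch; which disk and partition you point at depends entirely on your layout, and writing to the wrong boot sector can make a system unbootable):

```sh
# From the Grub Legacy shell, after copying the grub files into
# /boot/grub on (hd0,0) -- illustrative device names:
grub> root (hd0,0)     # partition containing the grub directory
grub> setup (hd0)      # write stage1 to the MBR of the first disk
```

`setup` embeds the location of the stage files so the MBR code can find them at the next boot.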
And so far, I've never had problems with video... well... sorta scratch... when I boot from memory (kexec), but that doesn't count -- it's not using lilo then..
Hey, I would love to use grub if it worked...but last time I tried it -- it turned off the video during boot and I couldn't see the bootup....
Grub Legacy couldn't have been responsible for that. Something put something on cmdline, or something got left out of or misconfigured in the kernel or initrd to cause that. Grub loads a kernel, and usually an initrd, plus feeds some parameters to the kernel, same as Lilo does. What the kernel does with what it's given isn't something Grub or Lilo have any control over -- not Grub Legacy, anyway. Grub2 is another story. 2.0 is barely out, and should really be called something-else 1.0, as it was for all practical purposes a complete redesign, akin to the change from KDE3 to KDE4. Bugs and incomplete documentation are to be expected from anything so immature.

Felix Miata
On 2012-08-08 02:41, Linda Walsh wrote:
Grub needs to know how to read a file system -- and how to read those file sectors off disk. It screwed that up so badly, that Suse stopped supporting XFS because grub doing direct disk read/writes on a live file system wasn't supported.
I was told that was fixed, but XFS support at SuSE never recovered to where it was.
This is simply FALSE. openSUSE has always supported XFS. What was broken was installing grub to an XFS partition, which is very different than what you said. And it was not the fault of openSUSE, it was an upstream issue that has been solved.

Cheers / Saludos, Carlos E. R. (from 12.1 "Asparagus" GM (bombadillo))
Carlos E. R. wrote:
On 2012-08-08 02:41, Linda Walsh wrote:
Grub needs to know how to read a file system -- and how to read those file sectors off disk. It screwed that up so badly, that Suse stopped supporting XFS because grub doing direct disk read/writes on a live file system wasn't supported.
I was told that was fixed, but XFS support at SuSE never recovered to where it was.
This is simply FALSE.
openSUSE has always supported XFS.
What was broken was installing grub to an XFS partition, which is very different than what you said. And it was not the fault of openSUSE, it was an upstream issue that has been solved.
To be clear... What wasn't supported is having an XFS-only based system -- because GRUB was the only loader supported at one point. So the implication of "you can't install grub on XFS" is that you can't install an XFS-only based system.

Second... as far as fault goes -- it was SuSE's fault, because they chose to use grub over LILO in that release, and LILO did not have the problem. They specifically dropped support for LILO in that release -- which I guess has been rescinded, but lilo is still ill supported.

references:
http://forums.opensuse.org/english/get-technical-help-here/install-boot-logi...
http://opensuse.14.n6.nabble.com/grub-no-longer-being-maintained-drops-suppo...
http://www.mentby.com/Group/opensuse/grub-no-longer-being-maintained-so-suse...
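As an aside, if you're unsure which of the two loaders actually wrote your boot sector, both leave an ASCII marker in the first 512 bytes that you can grep for. This hypothetical helper demonstrates the idea on a fake MBR file (Grub Legacy's stage1 contains the string "GRUB", lilo's sector contains "LILO"); on a real box you'd point it at /dev/sda, which needs root:

```shell
# detect_loader FILE: print which bootloader marker appears in the
# first 512 bytes of FILE (grub/lilo/unknown). Illustrative helper.
detect_loader() {
    head -c 512 "$1" | grep -aq GRUB && { echo grub; return; }
    head -c 512 "$1" | grep -aq LILO && { echo lilo; return; }
    echo unknown
}

# Demo on a fake boot sector instead of a real disk:
printf 'xx GRUB Geom Hard Disk Read Error' > /tmp/fake-mbr
detect_loader /tmp/fake-mbr    # prints: grub
```

This only identifies the code in the sector, of course, not which packages are installed.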
On Wed, 15 Aug 2012 10:58:48 -0700, Linda Walsh <suse@tlinx.org> wrote:
To be clear... What wasn't supported is having an XFS-only based system -- because GRUB was the only loader supported at one point.
Yes, like booting from software raid or booting from lvm, both still requiring a separate boot partition.
So the implication of "you can't install grub on XFS" is that you can't install an XFS-only based system.
So you also can't install an LVM-only system or a sw RAID one. So what?
Second...as far as fault -- it was SuSE's fault because they chose to use grub over LILO in that release, which did not have the problem.
LILO has other problems, not the least being that you have to rerun LILO after a kernel update.

Philipp
Philipp Thomas wrote:
On Wed, 15 Aug 2012 10:58:48 -0700, Linda Walsh <suse@tlinx.org> wrote:
To be clear... What wasn't supported is having an XFS-only based system -- because GRUB was the only loader supported at one point.
Yes, like booting from software raid or booting from lvm, both still requiring a separate boot partition.
Um... are you trying to boot from a separate partition to a RAID or LVM based system, where no file system is involved after boot? Cuz as near as I can tell, you are trolling.

You need a file system to boot from. That's always going to be the case. XFS is a file system -- not a volume manager.

But if you want an lvm+SW-raid system, you could do it with LILO if your SW RAID is on top of lvm (not sure about linux SW RAID, as it would have to have enough contiguous space on one device to read in the kernel; I do know it works with a BIOS SW RAID as the first device, though).

If you want to go the boot-from-lvm route, just create as many partitions at the front of your disk with lvm as you want alternate boots from, and LILO could boot from them -- I strongly doubt Grub/Grub2 could -- then partition the rest of your disk as SW RAID. Maybe 32MB partitions x 10 = 320MB at the beginning of each disk, only used on volume 0, to load the kernel from. You would probably still need a RAMDISK to load udev and set up a dev device -- that could be stored in a 2nd partition. In each case, lilo could map the HW sectors to the boot map and work.

So is that really what you want? Doesn't sound very flexible.
On Wed, 15 Aug 2012 21:39:08 -0700, Linda Walsh <suse@tlinx.org> wrote:
Um... are you trying you boot from a separate partition to a RAID or LVM based system and no file system is involved after boot?
Of course a file system is involved after boot. On my work system it's lvm for the disk management and xfs as the file system for all partitions.
Cuz as near as I can tell, you are trolling.
Didn't you notice that I wrote "sw raid OR lvm"? Nowhere did I say raid and lvm.
You need a file system to boot from.
Of course you do, and I never denied it. But booting from lvm requires a separate boot partition, as does booting from a sw raid (be it pure sw or a fake raid, aka BIOS raid, which also uses sw raid, just with the setup done by the BIOS) other than RAID 1, i.e. mirroring -- independent of the boot loader you use.
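The layout being described can be sketched like this (sizes and device names purely illustrative):

```sh
# Separate-/boot layout for an otherwise LVM (or sw-raid) system:
# /dev/sda1   ext2     /boot    ~100MB   # plain partition the loader can read
# /dev/sda2   LVM PV   rest of disk      # /, swap, etc. as logical volumes
```

The bootloader only ever has to read the small plain partition; everything inside the LVM or RAID container is the kernel's problem once it's running.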
You could do it with LILO if your SW RAID is on top of lvm (not sure about linux-SW RAID... as it would have to have enough contiguous space on 1 device to read in the kernel. I do know it works with a BIOS-SW RAID as the first device though...
With anything besides RAID 1 it does not work reliably but only by chance.
create as many partitions at the front of your disk with lvm as you want alternate boots from, and LILO could boot from them.
That would be too fragile a setup and no reason for me to use LILO. I use grub and have come to like the added flexibility grub gives me, and I have no problem with an extra ext2 boot partition.

Philipp
Philipp Thomas wrote:
On Wed, 15 Aug 2012 21:39:08 -0700, Linda Walsh <suse@tlinx.org> wrote:
Um... are you trying to boot from a separate partition to a RAID or LVM based system, where no file system is involved after boot?
Of course a file system is involved after boot. On my work system it's lvm for the disk management and xfs as the file system for all partitions.
Cuz as near as I can tell, you are trolling.
Didn't you notice that I wrote "sw raid OR lvm"? Nowhere did I say raid and lvm.
Oh -- you meant linux-based sw-raid. SW raids are included in many Dell BIOSes; no hardware is involved -- just SW -- and they work fine to boot off of. Of course, if your driver to run the RAID is IN a RAID partition that is only supported by the driver, you have a catch-22. But the same isn't true for booting with dm and lilo, since lilo just uses 'disk space' independent of a file system. So it could boot from a dm-only system (no file system), which is what you seemed to think wasn't supported in lilo.

So IF you have a file system like ext2 -- you can have an ext2-only based system, if you want; isn't that true? In fact, are there any file systems supported as 'linux file systems' (i.e. they have to be able to hold user/group permissions and all the standard stuff) that you can't boot from and use exclusively as your file system?

LVM and linux SW RAID are both virtual-disk-to-physical-disk mapping programs. LVM can do RAID 0 and 1, but its prime purpose was to manage 'volumes' as heterogeneous collections of physical devices that can be broken into arbitrary sizes, whereas linux SW RAID takes multiple homogeneous devices and maps them into 1 virtual disk (that is mirrored or interleaved, or whatever, depending on what RAID level is used). They are optimized for different use cases, but are still both ways of combining multiple physical devices into 'featured' virtual volumes. Lilo might have problems booting directly from a DM LV if the LV wasn't all on the same device.

Could grub boot without a file system? You can answer that you don't need that -- but you say you are going to use a file system. There is no requirement to use a separate, 'limited feature' file system to boot from in lilo, whereas at one point, grub wasn't able to support a boot & root on an advanced file system. It's rather like telling me that I need a FAT32 file system to boot from, and then I can boot to my real file system. Doesn't that sound a bit ridiculous and unreasonable?
How is that different from requiring ext2 in order to run an xfs-only system? My point (as indirectly as I expressed it) was that linux has no restrictions on what file system it boots from. Lilo supports that feature. Grub, at one point, did not (don't know about now). The fact that Grub didn't support the full linux file-system model should have been enough reason, alone, to keep lilo an equally supported choice alongside grub, as it provides features that Grub does not (and note -- I am not suggesting it be an exclusive choice, but as near as I can tell, it can support booting from anything Grub can, while the reverse was (and may still be) NOT true).

Saying you can't boot from a volume manager that needs OS support to properly emulate the virtual disk is a reasonable constraint. But file systems have no such abilities -- they can only run in one partition (though it can be physically split -- that's done outside of the file system by other SW). The only requirement I am asserting lilo needs is to have the kernel image on the same disk (NOTE: it may very well be able to boot from a RAID-mapped device, but I haven't researched that, so I'm not claiming it). With or without a file system, lilo can start a kernel from a single disk image. However, Grub needs 'fullish' file system support, as it provides pre-boot access to the file systems -- so it has higher requirements.

That grub was limited at the start should have been sufficient reason to keep a full-support bootloader in the primary-bootloader group. It wasn't. As a result, XFS was tossed aside -- and many people still don't boot from XFS even though they use XFS for their other partitions. That's a black eye for Grub that will only be recovered from when people stop using a separate file system for what would otherwise be an XFS-only system. But I *think* it is history now, and if I understand people correctly, XFS is fully supported as a boot file system (is that correct?)...
So at this point it's only a historical event that can be learned from, for those that remain open to learning. To those that don't, any mention of the topic no doubt sounds like a grating noise.
* Linda Walsh (suse@tlinx.org) [20120823 03:26]:
Oh -- you meant linux-based sw-raid. SW-raids are included in many Dell BIOS's, no-hardware is involved -- just SW -- and they work fine to boot off of.
That's what is normally called fakeRAID, as the BIOS only sets up the RAID drives and everything else is done by software. And it *only* works reliably with Grub when doing RAID 1, i.e. mirroring. Grub can't boot reliably from striping sw-raids as it has no driver for that.

Philipp
Philipp Thomas wrote:
* Linda Walsh (suse@tlinx.org) [20120823 03:26]:
Oh -- you meant linux-based sw-raid. SW-raids are included in many Dell BIOS's, no-hardware is involved -- just SW -- and they work fine to boot off of.
That's what is normally called fakeRAID as the BIOS only sets up the RAID drives and everything else is done by software. And it *only* works reliably with Grub when doing RAID1 i.e. mirroring. Grub can't boot reliably from striping sw-raids as it has no driver for that.
Just for clarity, there are two different RAID implementations to consider here. "linux-based sw-raid" would normally be taken to mean RAID implemented by linux and using the mdadm program and friends, with device names like /dev/md*. BIOS-based fakeraid is different.
Dave Howorth wrote:
Philipp Thomas wrote:
* Linda Walsh (suse@tlinx.org) [20120823 03:26]:
Oh -- you meant linux-based sw-raid. SW-raids are included in many Dell BIOS's, no-hardware is involved -- just SW -- and they work fine to boot off of. That's what is normally called fakeRAID as the BIOS only sets up the RAID drives and everything else is done by software. And it *only* works reliably with Grub when doing RAID1 i.e. mirroring. Grub can't boot reliably from striping sw-raids as it has no driver for that.
Holy poop, you are kidding? Lilo has no special support for booting from them either -- it sees them as 1 single SDA device as presented by the BIOS. Windows boots from them fine as well... I find it hard to believe grub wouldn't. It looks like 1 HD to the OS -- vs. on my HW-based RAID that's also booted from the ROM (different sys), the OS (linux) CAN tell there are underlying disks after it boots up.

I'd love to talk about what crap GRUB is for not being able to boot on something so common, but I'm 90% certain it does. It IS a type of LSI-based RAID -- just no HW support, but I'd guess LSI put in enough HW to at least fool an OS into seeing it as 1 disk -- it only supports RAID 0, 1, or 0+1 (or is that 1+0?)... But it's their default disk device if you don't buy a raid card.
Just for clarity, there are two different RAID implementations to consider here.
"linux-based sw-raid" would normally be taken to mean RAID implemented by linux and using the mdadm program and friends, with device names like /dev/md*.
BIOS-based fakeraid is different.
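One rough way to tell which flavor a running system is using (illustrative commands; mdadm and dmraid may not be installed, device names vary, and most of these need root):

```sh
# md (mdadm) software-raid arrays show up here:
cat /proc/mdstat
# ...and their member disks can be listed with:
mdadm --detail /dev/md0
# BIOS/fakeraid sets are discovered by dmraid:
dmraid -r
# Under true hardware RAID, only the virtual disk appears at all:
lsblk
```

With genuine hardware RAID, none of the raid tools see member disks -- the controller hides them -- which matches the distinction Dave draws above.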
--- Nothing fake about it for what it does. It interleaves 4 150MB/s SAS drives to yield up to 600MB/s just fine -- it doesn't do RAID 5 or 6 like HW RAID cards do, but for RAID 1/RAID 0, SW can do that fairly easily and efficiently. I'm more likely to think of SW RAID 5 as 'fakeraid', given its performance, but that's just some sorta put-down or 'one-upmanship' that I wouldn't even think of if you hadn't used the term.
On Sat, 25 Aug 2012 03:50:33 -0700, Linda Walsh <suse@tlinx.org> wrote:
Holy poop, you are kidding? Lilo has no special support for booting from them either -- it sees them as 1 single SDA1 device as presented by the BIOS.
Lilo sees a BIOS device, that's all. The kernel that boots later does see more than one device. Want proof? Boot a rescue system that doesn't load the dmraid stuff and presto, you'll see the separate drives that constitute the RAID array. That's what distinguishes software from hardware RAID: in hw raid the systems only see the RAID drives you defined, nothing else.

With a BIOS or an md RAID, booting directly from RAID with either LILO or grub is only reliable when it's a RAID 1, i.e. when mirroring, as it doesn't matter which drive the boot loader accesses. When you have a striping raid like RAID 0, it's pure luck if booting succeeds.

Philipp
Philipp Thomas wrote:
On Sat, 25 Aug 2012 03:50:33 -0700, Linda Walsh <suse@tlinx.org> wrote:
Holy poop, you are kidding? Lilo has no special support for booting from them either -- it sees them as 1 single SDA1 device as presented by the BIOS.
Lilo sees a BIOS device, that's all. The kernel that boots later does see more than one device.
I think we are experiencing different HW/SW. I see the exact opposite of what you describe. With the Dell 'virtual disk' SCSI disk device under a SAS 6/iR integrated workstation controller, there is nothing visible from any piece of HW that shows me the individual parts -- it IS 1 disk on windows (and when I had an older machine with similar HW running linux, it saw it as 1 disk as well, with no way to query individual disk status or even see how many there were).

My linux box runs with 2 HW raid controllers -- a Dell PERC 6/i card (the kernel sees a 1078) and an LSI 9280DE-8e, which the kernel sees with a 2108 driver -- which is a bit odd, since the 2108, I think, is the 6Gb line, and I think my card is only 3Gb... probably the same driver though, as the cards are the same family.

Anyway... in the HW setup, I get much more introspection into what's there than with the SW setup. Under /sys/devices/<bus> I see the SLOTS (enclosures) that each disk is plugged into (not sure what benefit this is yet... but I can see 8 enclosures for the internal HD controller), and under each enclosure I can see:

/sys/devices/pci0000:00/0000:00:0a.0/0000:02:00.0/host1/target1:0:32/1:0:32:0/enclosure/1:0:32:0/0:
  active : 0
  fault : 0
  locate : 0
  status : OK
  type : array device

/sys/devices/pci0000:00/0000:00:0a.0/0000:02:00.0/host1/target1:0:32/1:0:32:0/enclosure/1:0:32:0/0/power:
  async : disabled
  control : auto
  runtime_active_kids : 0
  runtime_active_time : 0
  runtime_enabled : disabled
  runtime_status : unsupported
  runtime_suspended_time : 0
  runtime_usage : 0

---- Have no idea what is real... since runtime usage isn't being kept. Don't know what async is in relation to 'power'; 'active' says 0... maybe that means being accessed at the moment? I'd guess writing to 'locate' might blink some enclosure light.
Want proof? Boot a rescue system that doesn't load the dmraid stuff and presto, you'll see the separate drives that constitute the RAID array.
Not if they are not DM-RAID compatible -- I have booted off of rescue on my old sys that had a similar BIOS RAID, and rescue saw it as one device as well.

Note -- I *boot* from my hard disk -- NOT from a ramdisk. I don't make a RAM disk to load from so my kernel can boot. The kernel includes all it needs to access the HD -- and loads modules that aren't mandatory for boot as needed. We might be talking some different terms somewhere, or else have different experiences due to hardware diffs. But my SW raid in BIOS looks like 1 device to linux and win (with no way to see what's under/in it), while my HW raid looks like 1 device for boot, but with the system up, I can introspect on the parts. The SW (BIOS) raid doesn't give me the ability to introspect on the parts -- so the SW only sees it as 1 device. That's why I'd be real surprised if GRUB didn't work with this type of SW RAID (primitive as it is... but sufficient if all you want is RAID 0/RAID 1).

I use raid0 for my workstation with small fast disks -- and store all data on my linux server. Ever since the switch to win7, my workstation hasn't been reliable enough to store data on -- it has lost its disk due to Windows 7 eating them at least three to four times since I've had it. In no case was there a HW problem -- but beware Windows system restore when it can't complete/finish the restore, as it tries to roll back "to leave your system unchanged", but doesn't do a very good job sometimes (30% of files randomly 'gone')... Have had that happen twice since it was installed in 2009, and 2 other times I needed to restore from an image or reinstall; once, reinstalling my disk files over the install plus my last registry backup restored the system to normal (for win7 anyway) function...

Only data loss I had on linux was self-induced -- on a disk that contained downloadable content only.
Unfortunately it had grown from something easily redownloadable to something that took ~3-5 months to recover most of the bits and pieces of. I now keep a local backup of my downloadable content as well... ;-)

None of the above touches on why anyone would consider it normal to require multiple file systems in order to boot. I don't even require a RAM disk, whereas the normal suse boot process uses one and, at one point, recommended the use of a non-xfs file system to boot from. That was caused solely by a bug in grub -- why that wasn't enough to revive LILO as a first-class alternative to grub, I dunno... But it was enough to drive me to lilo, first-class or not -- that and the ability to boot from hard disk and not have to use a RAM disk are bonuses for me, making boot notably faster. A moderate sized server coming up in ~25 seconds from start of kernel load isn't bad -- that time can be more if there are problems in startup (like network cards no longer talking under kernels >3.2.x)... Or if I have left open active snapshot volumes... (those can take a long time to reconstruct, depending on how long the snapshot was going before reboot)...
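For reference, booting without an initrd in the way described just means the disk controller and root-fs drivers are compiled into the kernel rather than built as modules. A kernel .config fragment illustrating the idea (the specific driver option depends entirely on the controller; these names are examples, not a recommendation):

```sh
# Kernel .config fragment for an initrd-less boot (illustrative):
# CONFIG_SCSI_MPT2SAS=y     <- boot controller's driver built in
# CONFIG_XFS_FS=y           <- root file system driver built in
```

With both built in, the kernel can mount the real root directly, and the distribution's initrd step becomes optional.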
On Sat, 25 Aug 2012 16:50:01 -0700, Linda Walsh <suse@tlinx.org> wrote:
I see the exact opposite of what you describe. With the Dell 'virtual disk' SCSI disk device under a SAS 6/iR integrated workstation controller, there is nothing visible from any piece of HW that shows me the individual parts
You never so much as hinted that you were talking about SAS, i.e. SCSI, onboard devices. I thought we were talking about the *much* more common SATA RAID on consumer level motherboards, and with those you will see what I wrote a number of times. Workstation boards and onboard SCSI controllers are in a completely different league, and I believe you that these do hide everything. In that case bootloaders will only see one device and will work in most cases.

Philipp
Philipp Thomas wrote:
On Sat, 25 Aug 2012 16:50:01 -0700, Linda Walsh <suse@tlinx.org> wrote:
I see the exact opposite of what you describe. With the Dell 'virtual disk' SCSI disk device under a SAS 6/iR integrated workstation controller, there is nothing visible from any piece of HW that shows me the individual parts
You never so much as hinted that you were talking about SAS, i.e. SCSI, onboard devices. I thought we were talking about the *much* more common SATA RAID on consumer level motherboards, and with those you will see what I wrote a number of times.
I thought SATA devices were moving toward SAS? Most of the disks I have hooked up to my SAS controller ARE 7.2K SATAs. Among the disks, only 3/32 devices are really SAS drives, so the detail that it's a SAS controller isn't something normally first and foremost on my mind. It's a very small cost difference to get a SAS controller that handles both types of drives vs. a SATA-only controller that is far more limited. I'd be surprised if many modern machines don't have some inexpensive option for upgrade.

Dell doesn't offer a SATA-only based workstation controller for their workstations or servers. It would take too much work to cripple it enough to save <1% on some configs (they do offer SATA-only configurations, just not SATA-only controllers, in their workstation line for the past 12+ years). So it never occurred to me that I'd even need to mention such. Sorry.
On Sun, 26 Aug 2012 10:46:24 -0700, Linda Walsh <suse@tlinx.org> wrote:
I thought SATA devices were moving toward SAS?
No, they're not. They use the same cabling and connectors, but there it ends. That's why you can hook SATA drives to SAS host controllers but not the other way round: SATA controllers don't speak SCSI over serial lines.
Dell doesn't offer a SATA-only based workstation controller for their workstations or servers.
Of course not. But if we're talking workstations and servers, we're in a totally different market with vastly differing prices. Just look at what you pay for SAS drives or workstation graphics cards like ATI FireGL or NVidia's Quadro. So it needs to be made explicit what hardware we're speaking of, in order to avoid needless discussions.

Philipp
participants (6)
- Carlos E. R.
- Dave Howorth
- Felix Miata
- Linda Walsh
- Philipp Thomas
- Philipp Thomas