[opensuse] software RAID vs BIOS RAID
I am about to set up RAID-1 on my system, and I am looking for some experienced opinions. Here is my situation. I only need a mirror-image drive to protect my data against drive failure, so RAID-1 is the route I would like to take. I don't want to have to reinstall my OS, as I have a ton of things installed and configured now that are exactly how I want them.

My original drive is a 500GB, partitioned like so:

/dev/sda1          2048   4208639   2103296  82  Linux swap / Solaris
/dev/sda2  *    4208640  46153727  20972544  83  Linux
/dev/sda3      46153728 976773119 465309696  83  Linux

I have recently purchased a 1TB drive to use as the mirror image, with the intent that the 2nd 500GB on this drive will be used for other purposes, like maybe testing a new installation when it comes out, or just holding extra data that I don't need on the mirror-image partition. I have installed the 1TB drive and hooked it up. It is not yet partitioned.

I have done some research on the internet, and I found some tech sites that said that the RAID setup in the BIOS is easy to set up, and that you can do it without having to reinstall your OS. However, other sites have said that you do have to reinstall your OS, as it will wipe the original drive. (In any case, it would have been easier if I had set up RAID before the initial install, but too late now.)

My BIOS setup only has 1 line indicating RAID, in the "IDE setup" menu, where it allows you to configure nVidia RAID as enabled or disabled. The motherboard user guide (it is an ASUS M2N68-AM SE2) doesn't give any other information.

If I put the RAID setting to enabled, then on my next subsequent reboot, will the BIOS run me through a setup utility to set up RAID? If I knew for sure that it wouldn't wipe my original hard drive, I would tend to go that way, as it seems simpler.
This website, "http://lifehacker.com/352472/set-up-real+time-bulletproof-backup-drive-redun...", indicated a very simple setup, but that is only 1 guy, so I am skeptical whether it is really as simple as he makes it out to be.

So, for all of you that have experience in this: do most/all BIOS setups that support RAID allow for setting up RAID after the OS is already installed on the first drive?

I know that some of you probably prefer software RAID, and I see that YaST has a partitioner and the means to set up Linux software RAID. If you are a software RAID advocate, what would be the advantages to me of using software RAID over BIOS RAID?

Thanks in advance for help in making my decision.

George
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org
On 9/13/2011 6:13 PM, George OLson wrote:
I am about to set up RAID-1 on my system, and I am looking for some experienced opinions.
Here is my situation. I only need a mirror image drive to protect my data against drive failure, so RAID-1 is the route I would like to take. I don't want to have to reinstall my OS, as I have a ton of things installed and configured now that is exactly how I want it.
My original drive is a 500GB, partitioned like so:
/dev/sda1          2048   4208639   2103296  82  Linux swap / Solaris
/dev/sda2  *    4208640  46153727  20972544  83  Linux
/dev/sda3      46153728 976773119 465309696  83  Linux
I have recently purchased a 1TB drive to use as the mirror image, with the intent that the 2nd 500GB on this drive will be used for other purposes, like maybe testing a new installation when it comes out, or just having extra data that I don't need on the mirror image partition. I have installed the 1TB drive and hooked it up. It is not yet partitioned.
I have done some research on the internet, and I found some tech sites that said that using the RAID setup in the BIOS is easy to set up, and you can do it without having to reinstall your OS. However, other sites have said that you do have to reinstall your OS, as it will wipe the original drive. (In any case, it would have been easier if I had set up RAID before the initial install, but too late now.)
My BIOS setup only has 1 line indicating RAID, in the "IDE setup" menu, where it allows you to configure nVidia RAID as enabled or disabled. The motherboard user guide (it is an ASUS M2N68-AM SE2) doesn't give any other information.
If I put the RAID setting to enabled, then on my next subsequent reboot, will the BIOS run me through a setup utility to set up RAID? If I knew for sure that it wouldn't wipe my original hard drive, I would tend to go that way, as it seems simpler. This website, "http://lifehacker.com/352472/set-up-real+time-bulletproof-backup-drive-redun...", indicated a very simple setup, but that is only 1 guy, so I am skeptical whether it is really as simple as he makes it out to be.
So for you all that have experience in this, do most/all BIOS setups that support RAID allow for setting up RAID after the OS is already installed on the first drive?
I know that some of you probably prefer software RAID, and I see that Yast has a partitioner and the means to setup linux software RAID.
If you are a software RAID advocate, what would be the advantages to me of using software RAID over BIOS RAID?
With software RAID you can apply RAID after the fact. YaST does this for you. I had to do this (back in 11.0, I believe), where I had a data partition on a single drive and decided to add another for RAID-1. I built the array in YaST just to see if it could be done (all my prior setups were done with mdadm at the command line). I told YaST that the second drive was a hot spare, and that the array was running degraded. It rebuilt it as soon as it fired up.

Oh, and yes, I DID take a backup before setting this up, and so should you, but then you know this; anyone careful enough to be setting up RAID knows this.

At one time there was a problem having /boot on RAID, and it's been a while since I had to reconfigure a fresh box, so I don't know if this is still the case.

--
---This space for rent---
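For anyone wondering what "it rebuilt it as soon as it fired up" looks like from a shell: the kernel reports array state and resync progress in /proc/mdstat, and mdadm can print per-array details. A quick read-only sketch, assuming an array at /dev/md0 (the device name is illustrative):

```shell
# One-shot view of all md arrays and any resync in progress
cat /proc/mdstat

# Detailed state of a single array: State, Rebuild Status, member devices
mdadm --detail /dev/md0

# Re-display /proc/mdstat every 2 seconds to watch a rebuild run to 100%
watch cat /proc/mdstat
```

These commands only read state; they won't change anything on the array.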
John Andersen wrote:
At one time there was a problem having /boot on raid, and its been a while since I had to reconfigure a fresh box, so I don't know if this is still the case.
With lilo it works fine, but I don't know about grub.

--
Per Jessen, Zürich (16.0°C)
Per Jessen wrote:
John Andersen wrote:
At one time there was a problem having /boot on raid, and its been a while since I had to reconfigure a fresh box, so I don't know if this is still the case.
With lilo it works fine, but I don't know about grub.
It doesn't work with grub. With RAID and also LVM, /boot has to be on a regular partition. I recently set up a server with four 1 TB drives, with LVM on RAID 4. I created a 2 GB partition to hold /boot and used the other three 2 GB partitions for swap. Everything else is in the LVM on RAID. That system will soon also have the data backed up to another computer in a different country.
On 2011/09/15 06:08 (GMT-0400) James Knott composed:
Per Jessen wrote:
John Andersen wrote:
At one time there was a problem having /boot on raid, and its been a while since I had to reconfigure a fresh box, so I don't know if this is still the case.
With lilo it works fine, but I don't know about grub.
It doesn't work with grub.
It doesn't work with Grub Legacy (default and fully supported in openSUSE). It's claimed to work with Grub2 (a mini OS, much more complicated and powerful than Grub Legacy).

--
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
On 9/15/2011 6:08 AM, James Knott wrote:
Per Jessen wrote:
John Andersen wrote:
At one time there was a problem having /boot on raid, and its been a while since I had to reconfigure a fresh box, so I don't know if this is still the case. With lilo it works fine, but I don't know about grub.
It doesn't work with grub. With RAID and also LVM, /boot has to be on a regular partition. I recently set up a server with four 1 TB drives, with LVM on RAID 4. I created a 2 GB partition to hold /boot and used the other three 2 GB partitions for swap. Everything else is in the LVM on RAID. That system will soon also have the data backed up to another computer in a different country.
It works fine, it just has to be RAID-1. I usually don't actually do this any more, simply because it's not worth the fuss, but just for reference, it's perfectly doable and works fine. In a few cases where, for whatever reason, I can't install a USB thumb drive to boot from, I do still do this.

* You make a small boot partition on one drive,
* fdisk type "fd" Linux raid autodetect,
* mark it bootable,
* clone the partition table to all other drives,
* create a RAID-1 array using all the partition-1's,
* put /boot on that array in YaST,

and you're pretty much done. When the BIOS boots, it picks one of the drives and boots grub from that drive's MBR or from that drive's bootable partition. Grub reads its files just fine from whatever drive the BIOS happened to pick as the boot drive; grub does not know or care that the filesystem it's reading is normally a member of a RAID-1 array. At boot time it's just a plain filesystem on a plain partition.

The important factors are:

* The mdadm RAID metadata does not modify the individual filesystems that it's maintaining copies of. Each copy is still a valid free-standing filesystem, as if it were never part of an array. This is not necessarily true for other RAID implementations, but it is true for Linux mdraid. This means that when the BIOS boots grub or another boot loader, the bootloader does not have to include a RAID driver to read the partition or the filesystem; it can read any individual RAID-1 volume as a plain filesystem on a plain partition on a plain disk.

* The bootloader in most cases is purely read-only. It does not modify one byte of the data in the RAID volume it's reading, and so a few seconds later, when the kernel loads up and starts looking around for RAID arrays to assemble, all the volumes of the RAID-1 array are still consistent. The RAID-1 array assembles just fine every time. Once the kernel has done that, all further writes until power-off are written to the array, not any single drive, so no problem.
(assuming the OS bootloader manager tools are configured correctly to write to /dev/md0, not /dev/sd*, as per my other post)

Some actual commands for an example 8-drive box:

Start a fresh install and either switch to another screen for a normal local console install, or, for a remote text-mode install, use ssh and don't start YaST in the first place when you first log in. Either way, get to a shell after the install environment is loaded up but before YaST has gone past the first screen or two.

Use fdisk or sfdisk or parted to partition one drive, /dev/sda, with, say, a 512M or 1G /boot partition. You can't grow this later, and you may end up needing to store several different versions of kernels and accompanying large initrds, not to mention various other possible boot files, like maybe a Knoppix or Puppy Linux whole-system-in-RAM image, and you don't want kernel updates to fail in a couple of years because it's out of room. You may want to make /boot even, say, 5G. But definitely 512M at least, just to allow normal room for kernels and initrds if you ever turn on multiple versions for testing KOTD etc. And one big everything-else partition. Knock yourself out making more partitions if you want, for /home, /var, swap etc... that would just point out even more why not to do this part manually in YaST during install. Mark the /boot one active (bootable), and mark them both type "fd" Linux raid autodetect, not type 83 Linux.

Then clone sda to sdb:

# sfdisk -d /dev/sda | sfdisk /dev/sdb

Then use the shell history to repeat the command for the rest of the drives. Up-arrow, edit the last character, enter; repeat 6 times, bang bang bang, done.

# sfdisk -d /dev/sda | sfdisk /dev/sdc
# sfdisk -d /dev/sda | sfdisk /dev/sdd
# sfdisk -d /dev/sda | sfdisk /dev/sde
# sfdisk -d /dev/sda | sfdisk /dev/sdf
# sfdisk -d /dev/sda | sfdisk /dev/sdg
# sfdisk -d /dev/sda | sfdisk /dev/sdh

This is for MSDOS partition tables, which are still the norm.
Unfortunately, last time I looked (not too recently) there was no equally efficient way to clone GUID partition tables with parted or anything else. But luckily GPT is still not the norm and generally not necessary, and the nice sfdisk way is available.

Then make sure the RAID modules you will need are loaded. Usually raid0, raid1, and raid456 are loaded by default, and these days raid10 is present in the install environment but not loaded by default. If you want raid10, and you want to use the nice raid10 module, which is a bit more sophisticated and a heck of a lot easier than manually layering raid0 and raid1 on top of each other, just "modprobe raid10".

Then create the RAID-1 /boot array:

# mdadm -C -l1 -n8 /dev/md0 /dev/sd{a,b,c,d,e,f,g,h}1

Then create the / array, let's say RAID-5 so you don't have to worry about the modprobe issue:

# mdadm -C -l5 -n8 /dev/md1 /dev/sd{a,b,c,d,e,f,g,h}2

Those are literal valid shell syntax, and there are a few reasons to actually type it just that way:

* It's easier and faster than /dev/sda1 /dev/sdb1 /dev/sdc1 ...
* It's less error-prone: you can't accidentally forget one of the 2's, or mistakenly make it a 1 because you imperfectly edited the previous command with all 1's.
* The smaller syntax /dev/sd[a-h]2 only works for contiguous consecutive ranges, which may not be the case, and doesn't work in the installer's less feature-rich shell, and possibly not in the emergency shell in the initrd during a failed boot attempt either.

Then either return to the YaST screen if it's already running, or run YaST now, and the arrays md0 and md1 will appear and be selectable in YaST. Put /boot on md0 and / on md1.

You _can_ do all that manually, completely from within YaST, but it's sooo many clicks and steps and entering values manually, correctly, repeatedly, into fields. It's very error-prone and tedious. But for only a few drives and only one machine one time, maybe it's simpler than going to the shell if you're not used to it.
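For what it's worth, on GPT disks the sgdisk tool (from the gdisk package, where available) can do an analogous clone. A sketch with the same hypothetical sda-to-sdb direction; note that, unlike the sfdisk pipe, the destination disk comes first:

```shell
# Replicate sda's GPT onto sdb (the destination is the -R argument)
sgdisk -R=/dev/sdb /dev/sda

# Randomize sdb's disk and partition GUIDs so the two tables don't
# carry identical identifiers
sgdisk -G /dev/sdb
```

As with the sfdisk version, this overwrites the destination's partition table, so triple-check the device names before running it.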
--
bkw
On 9/15/2011 8:54 AM, Brian K. White wrote:
* The mdadm raid metadata does not modify the individual filesystems that it's maintaining copies of. Each copy is still a valid free-standing filesystem as if it were never part of an array. This is not necessarily true for other raid implementations but it is true for linux mdraid. This means that when the bios boots grub or other boot loader, the bootloader does not have to include a raid driver to read the partition or the filesystem, it can read any individual raid1 volume as a plain filesystem on a plain partition on a plain disk.
And this fact saved my bacon on two occasions. Both in the same machine, a week apart, when two drives from the same lot failed within one week of each other.

BTW: Steve Boley at Dell posted a mini-how-to on booting from software RAID 1, way back in 2003: http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html

--
---This space for rent---
John Andersen wrote:
With software raid you can apply raid after the fact. Yast does this for you. I had to do this, (back in 11.0 I believe) where I had a data partition on a single drive, and decided to add another for raid 1. I built the array in yast just to see if it could be done. (all my prior setups were done with Madm at the command line). I told yast that the second drive was a hot spare, and that the array was running degraded. It rebuilt it as soon as it fired up.
I'm curious as to how this worked. Don't mdadm physical volumes have a different partition type to a 'regular' filesystem?

On the other hand, I can see how the instructions at <https://wiki.archlinux.org/index.php/Convert_a_single_drive_system_to_RAID> would work to perform the task, i.e.:

* Create a single-disk RAID-1 array with our new disk
* Move all your data from the old disk to the new RAID-1 array
* Verify the data move was successful
* Wipe the old disk and add it to the new RAID-1 array
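Those four steps can be sketched with mdadm. The device names (/dev/sdb1 for the new disk's partition, /dev/sda3 for the old data partition), the mount point, and the ext4 filesystem are made-up examples for illustration, not a recipe to paste:

```shell
# 1) Create the array with only the new disk; "missing" holds the slot
#    the old disk will fill later, so the array starts out degraded
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mkfs.ext4 /dev/md0

# 2) Copy the data across to the new (degraded) array
mkdir -p /mnt/newraid
mount /dev/md0 /mnt/newraid
rsync -a /data/ /mnt/newraid/

# 3) Verify the copy before touching the old disk
diff -r /data /mnt/newraid

# 4) Add the old partition; mdadm overwrites it while resyncing the mirror
mdadm --add /dev/md0 /dev/sda3
cat /proc/mdstat   # watch the rebuild progress
```

Everything here needs root, and steps 1 and 4 are destructive to the partitions named, which is why the verify step sits between the copy and the wipe.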
Dave Howorth wrote:
John Andersen wrote:
With software raid you can apply raid after the fact. Yast does this for you. I had to do this, (back in 11.0 I believe) where I had a data partition on a single drive, and decided to add another for raid 1. I built the array in yast just to see if it could be done. (all my prior setups were done with Madm at the command line). I told yast that the second drive was a hot spare, and that the array was running degraded. It rebuilt it as soon as it fired up.
I'm curious as to how this worked. Don't mdadm physical volumes have a different partition type to a 'regular' filesystem?
0xFD for raid auto-detect.
On the other hand, I can see how the instructions at
<https://wiki.archlinux.org/index.php/Convert_a_single_drive_system_to_RAID>
would work to perform the task. i.e.:
* Create a single-disk RAID-1 array with our new disk
* Move all your data from the old disk to the new RAID-1 array
* Verify the data move was successful
* Wipe the old disk and add it to the new RAID-1 array
I'm pretty certain I've done something like that in the past.

--
Per Jessen, Zürich (16.3°C)
Per Jessen wrote:
Dave Howorth wrote:
I'm curious as to how this worked. Don't mdadm physical volumes have a different partition type to a 'regular' filesystem?
0xFD for raid auto-detect.
Indeed, and 0xDA for more modern RAIDs, IIRC. Ah yes .. http://comments.gmane.org/gmane.linux.raid/19311
On 9/13/2011 6:13 PM, George OLson wrote:
I have recently purchased a 1TB drive to use as the mirror image, with the intent that the 2nd 500GB on this drive will be used for other purposes, like maybe testing a new installation when it comes out, or just having extra data that I don't need on the mirror image partition. I have installed the 1TB drive and hooked it up. It is not yet partitioned.
Some BIOS RAID solutions don't allow mixed drive sizes. Some of them mirror the whole drive and are unaware of what OS is actually installed. Others are fake RAID and require drivers. But most of them require matched drives. Software RAID does not. Better check what your BIOS allows.

--
---This space for rent---
On 2011/09/14 09:13 (GMT+0800) George OLson composed:
I am about to set up RAID-1 on my system, and I am looking for some experienced opinions.
My original drive is a 500GB, partitioned like so:
/dev/sda1 2048 4208639 2103296 82 Linux swap / Solaris /dev/sda2 * 4208640 46153727 20972544 83 Linux /dev/sda3 46153728 976773119 465309696 83 Linux
I have recently purchased a 1TB drive to use as the mirror image, with the intent that the 2nd 500GB on this drive will be used for other purposes, like maybe testing a new installation when it comes out, or just having extra data that I don't need on the mirror image partition. I have installed the 1TB drive and hooked it up. It is not yet partitioned.
I know that some of you probably prefer software RAID, and I see that Yast has a partitioner and the means to setup linux software RAID.
If you are a software RAID advocate, what would be the advantages to me of using software RAID over BIOS RAID?
I have two RAID1 systems. I couldn't see any advantage to using BIOS fake RAID over software RAID, and don't believe what I wanted to do would even be possible that way.

http://fm.no-ip.com/Tmp/Linux/big31L03.txt shows the partitioning as built. http://fm.no-ip.com/Tmp/Linux/big31L06a.txt shows it after doubling the size of the smaller original #1 HD, but before I completely finished reconfiguring to make use of the available space increase.

--
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
On 09/13/2011 08:13 PM, George OLson wrote:
I have recently purchased a 1TB drive to use as the mirror image, with the intent that the 2nd 500GB on this drive will be used for other purposes, like maybe testing a new installation when it comes out, or just having extra data that I don't need on the mirror image partition. I have installed the 1TB drive and hooked it up. It is not yet partitioned.
I have done some research on the internet, and I found some tech sites that said that using the RAID setup in the BIOS is easy to set up, and you can do it without having to reinstall your OS. However, other sites have said that you do have to reinstall your OS, as it will wipe the original drive. (In any case, it would have been easier if I had set up RAID before the initial install, but too late now.)
You are correct - BIOS RAID is not an option if you want to save your current install (unless you use dd to block-copy your OS off to another spare drive, then install 2 [same size] drives, set up the BIOS RAID, boot from the install CD and use dd to reinstall your OS onto the new mirror). Yes, you can mirror all partitions (/boot, /, /home & swap).

I have run both BIOS RAID (dmraid, called Fake RAID) and Linux software RAID (mdraid). Both are great RAID-1 solutions. Both have comparable performance (the software mirror's demand on the system is negligible, and you will not notice any performance hit in benchmarks).

You will have to get input from others on how to create a software RAID on the new disk from an existing install (I haven't done that). However, software RAID is very flexible, so you may be able to do what you need without too much grief. In some respects software RAID is a better solution. It is easier to move disks between boxes should you have that need. It is possible with dmraid, but it involves setting up a new pair of disks and then block-copying from one of the drives onto the new array.

Both will give you the single-disk fault tolerance that RAID is designed to provide. In case of a disk failure, you can continue to run from either in 'degraded' (single disk) mode until you have replacement hardware. However - NEITHER dmraid nor mdraid is a substitute for prudent BACKUPS. Any failure that will scatter the data on the array will happily scatter the data on both disks (controller or other hardware failure). So RAID isn't an excuse not to back up.

Good luck. You will be satisfied with either solution. I use dmraid if I have a RAID-capable BIOS and software RAID if I don't. That simple. I've been happy with both for the past decade.

--
David C. Rankin, J.D., P.E.
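As a sketch of what recovering from that single-disk failure looks like with mdraid (the array /dev/md0 and member /dev/sdb1 are hypothetical names for illustration):

```shell
# Mark the failing member as faulty and pull it out of the array;
# the array keeps running in degraded mode on the surviving disk
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# ...physically swap the drive and partition it to match, then re-add;
# the mirror rebuilds onto the new disk in the background
mdadm --manage /dev/md0 --add /dev/sdb1

# Rebuild progress shows up here
cat /proc/mdstat
```

All of this requires root and a real md array; it is an outline of the procedure, not something to run verbatim.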
When I first started in Linux and found out about RAID, I set up RAID 1. THEN, unfortunately, the way I play with my system, I found RAID to be very detrimental, because I accidentally messed things up to where I could not boot. THAT meant my partner drive was also messed up. So I took off RAID, and every night I manually run "rsync" to keep the two drives in sync.

At least for me, this provides a "live" backup and a "simulated" RAID condition. A good example of the use of this simulated RAID is when I update to newer versions. Before I start the upgrade, I rsync the drives. This way I have a quick backout plan.

So in conclusion, RAID 1 is GREAT until you do a major screw-up - then BOTH drives are not usable.

Regards,
Duaine
--
Duaine Hechler
Piano, Player Piano, Pump Organ Tuning, Servicing & Rebuilding
Reed Organ Society Member
Florissant, MO 63034
(314) 838-5587
dahechler@att.net
www.hechlerpianoandorgan.com
--
Home & Business user of Linux - 11 years
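That nightly rsync mirror can be seen in miniature with two ordinary directories standing in for the two drives (the temp paths here are purely for the demo; on the real system they would be the mount points of the two disks):

```shell
# Two temp directories play the live drive and the mirror drive
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "important data" > "$SRC/file.txt"

# -a preserves permissions, times, and symlinks; --delete removes files
# from the mirror that no longer exist on the source, so the mirror
# stays an exact copy instead of accumulating stale files
rsync -a --delete "$SRC"/ "$DST"/

cat "$DST/file.txt"
```

The trailing slashes matter: "$SRC"/ means "the contents of SRC", so files land directly in the destination rather than in a nested subdirectory.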
On 2011/09/13 23:06 (GMT-0500) Duaine Hechler composed:
So I took off RAID, and every night I manually run "rsync" to keep the two drives in sync.
At least for me, this provides a "live" backup and a "simulated" RAID condition.
A good example of the use of this simulated RAID, is when I update to newer versions.
Before I start the upgrade, I rsync the drives. This way I have a quick backout plan.
So in conclusion, RAID 1 is GREAT until you do a major screw up - then BOTH drives are not usable.
With HDs as large as they are any more, it borders on incompetence not to have multiple / partitions: a first, a next, and a 3rd or more for experimenting. The first is just that, the first OS installed. Then at "upgrade" time, install the "upgrade" to the "next" partition, leaving the first undisturbed. Only after "next" is confirmed suitable do you "convert" it to main, which "first" immediately before was, and after which "first" becomes an online backup until "next2" becomes available. Meanwhile, a third or more are available for testing devel version(s) and/or other distros.

In this scenario, /home and other user data partitions, if any, are separate, and mountable as such under any / you have booted. Some care must be taken about user data to prevent corruption when switching among non-matching versions of software under the various installed versions of OS, but this is not difficult.

My two RAID1 systems have 3 OS / md devices each, one md device for /tmp, one md device for /home, and a couple of other md devices for other data. /boot I don't make into RAID, because I see little point. I clone (then set a new UUID and label) the /boot from the #1 HD to the #2, so that it can readily be used as a sole boot device in case the #1 HD dies. I use labels for devices in menu.lst and fstab, which are a bit easier for human eyes to maintain than device ID or UUID.

I have eSATA HDs for backing up, which are only powered at backup times, but are much faster at transferring data than USB 2.0.

--
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
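Labeling a filesystem and then referring to it by label is a one-liner per filesystem. A sketch with invented label names (e2label covers ext2/3/4; other filesystems have their own labeling tools):

```
# Give each filesystem a human-readable label (run as root):
#   e2label /dev/sda2  OS_MAIN
#   e2label /dev/md2   HOME

# /etc/fstab entries can then use the label instead of a device path:
LABEL=OS_MAIN   /       ext4   defaults   1 1
LABEL=HOME      /home   ext4   defaults   1 2
```

Unlike /dev/sdXN names, labels survive drives being re-ordered or moved to another controller, which is the point Felix makes about maintainability.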
On 09/14/2011 12:28 AM, Felix Miata wrote:
On 2011/09/13 23:06 (GMT-0500) Duaine Hechler composed:
So I took off RAID, and every night I manually run "rsync" to keep the two drives in sync.
At least for me, this provides a "live" backup and a "simulated" RAID condition.
A good example of the use of this simulated RAID, is when I update to newer versions.
Before I start the upgrade, I rsync the drives. This way I have a quick backout plan.
So in conclusion, RAID 1 is GREAT until you do a major screw up - then BOTH drives are not usable.
With HDs so large as they are any more it borders on incompetence to not have multiple / partitions, a first, a next, and a 3rd or more for experimental. The first is just that, the first OS installed. Then at "upgrade" time, install the "upgrade" to the "next" partition, leaving the first undisturbed. Only after "next" is confirmed suitable do you "convert" it to main, which first immediately before was, and after which first becomes an online backup until next2 becomes available. Meanwhile a third or more are available for testing devel version(s) and/or other distros. In this scenario, /home and other user data partitions, if any, are separate, and mountable as such under any / you have booted. Some care must be taken about user data to prevent corruption switching among non-matching versions of software under the various installed versions of OS, but this is not difficult.
My two RAID1 systems have 3 OS / md devices each, one md device for /tmp, one md device for /home, and couple of other md devices for other data. /boot I don't make into RAID because I see little point. I clone (then set a new UUID and label) the /boot from the #1 HD to the #2 so that it can readily be used as a sole boot device in case the #1 HD dies. I use labels for devices in menu.lst and fstab, which are a bit easier for human eyes to maintain than device ID or UUID.
I have eSATA HDs for backing up, which are only powered at backup times, but much faster at transferring data than USB 2.0.

Although I already have a "/", swap and /home, I'm doing nothing that needs anything this complicated. And I've learned from my mainframe days NOT to be on the bleeding edge of upgrading.
I'm just a simple home and small business user of Linux. And, if I really want to experiment, I can always use VirtualBox.

Duaine
--
Duaine Hechler
Piano, Player Piano, Pump Organ Tuning, Servicing & Rebuilding
Reed Organ Society Member
Florissant, MO 63034
(314) 838-5587
dahechler@att.net
www.hechlerpianoandorgan.com
--
Home & Business user of Linux - 11 years
On 09/14/2011 01:20 AM, Duaine Hechler wrote:
On 09/14/2011 12:28 AM, Felix Miata wrote:
On 2011/09/13 23:06 (GMT-0500) Duaine Hechler composed:
So I took off RAID, and every night I manually run "rsync" to keep the two drives in sync.
At least for me, this provides a "live" backup and a "simulated" RAID condition.
A good example of the use of this simulated RAID, is when I update to newer versions.
Before I start the upgrade, I rsync the drives. This way I have a quick backout plan.
So in conclusion, RAID 1 is GREAT until you do a major screw up - then BOTH drives are not usable.
With HDs as large as they are these days, it borders on incompetence not to have multiple / partitions: a first, a next, and a third or more for experiments. The first is just that: the first OS installed. Then at "upgrade" time, install the "upgrade" to the "next" partition, leaving the first undisturbed. Only after "next" is confirmed suitable do you "convert" it to main (the role first held immediately before), after which first serves as an online backup until a next2 becomes available. Meanwhile, a third or more are available for testing devel version(s) and/or other distros. In this scenario, /home and other user-data partitions, if any, are separate, and mountable as such under any / you have booted. Some care must be taken with user data to prevent corruption when switching among non-matching versions of software under the various installed OSes, but this is not difficult.
My two RAID1 systems have 3 OS / md devices each, one md device for /tmp, one md device for /home, and a couple of other md devices for other data. /boot I don't make into RAID, because I see little point. Instead I clone the /boot from the #1 HD to the #2 (then set a new UUID and label) so that it can readily be used as a sole boot device in case the #1 HD dies. I use labels for devices in menu.lst and fstab, which are a bit easier for human eyes to maintain than device IDs or UUIDs.
I have eSATA HDs for backing up, which are only powered at backup times, but are much faster at transferring data than USB 2.0.
Although, I already have a "/", swap and /home, I'm doing anything that I need anything this complicated. And, I've learned from my mainframe days, NOT to be on the bleeding edge of upgrading.
I'm just a simple home and small business user of Linux.
And, if I really want to experiment, I can always use VirtualBox.
Duaine
That should read - I'm doing nothing ....... -- Duaine Hechler Piano, Player Piano, Pump Organ Tuning, Servicing& Rebuilding Reed Organ Society Member Florissant, MO 63034 (314) 838-5587 dahechler@att.net www.hechlerpianoandorgan.com -- Home& Business user of Linux - 11 years -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/a836ff90f492078f494adcf0c6059fc6.jpg?s=120&d=mm&r=g)
On 2011/09/14 01:23 (GMT-0500) Duaine Hechler composed:
Duaine Hechler wrote:
Although, I already have a "/", swap and /home, I'm doing anything that I need anything this complicated. And, I've learned from my mainframe days, NOT to be on the bleeding edge of upgrading.
I'm just a simple home and small business user of Linux.
And, if I really want to experiment, I can always use VirtualBox.
Duaine
That should read - I'm doing nothing .......
Here multiboot is not "complicated", it's SOP. Also, it works with machines that don't support more than 512M of RAM and/or are too slow for practical use of virtualization. Here, all but a small fraction of systems are configured multiboot, and none have virtualization configured. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/ -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/5ac12776af8e11a58b0b513293feb2d6.jpg?s=120&d=mm&r=g)
On 09/14/2011 01:28 PM, Felix Miata wrote:
With HDs as large as they are these days, it borders on incompetence not to have multiple / partitions: a first, a next, and a third or more for experiments. The first is just that: the first OS installed. Then at "upgrade" time, install the "upgrade" to the "next" partition, leaving the first undisturbed. Only after "next" is confirmed suitable do you "convert" it to main (the role first held immediately before), after which first serves as an online backup until a next2 becomes available. Meanwhile, a third or more are available for testing devel version(s) and/or other distros. In this scenario, /home and other user-data partitions, if any, are separate, and mountable as such under any / you have booted. Some care must be taken with user data to prevent corruption when switching among non-matching versions of software under the various installed OSes, but this is not difficult.
My two RAID1 systems have 3 OS / md devices each, one md device for /tmp, one md device for /home, and a couple of other md devices for other data. /boot I don't make into RAID, because I see little point. Instead I clone the /boot from the #1 HD to the #2 (then set a new UUID and label) so that it can readily be used as a sole boot device in case the #1 HD dies. I use labels for devices in menu.lst and fstab, which are a bit easier for human eyes to maintain than device IDs or UUIDs.
I have eSATA HDs for backing up, which are only powered at backup times, but are much faster at transferring data than USB 2.0.
This discussion is very good and is giving me some ideas. I can see the value of having multiple roots, and I would like to move to that eventually. I followed Felix's advice and did that on my laptop, but when I set up this desktop I had not yet thought about that.

So here is a question about multiple roots - when setting up 2 or 3 20gb root partitions for future experimentation, do those partitions have to be primary partitions? Also, do the root partitions have to be next to the swap and next to the /home partition, or can they be anywhere on the drive?

Suppose I set up my TB drive so that it is like this:

/dev/sdb1 2gb, linux swap, to be copied from /dev/sda1 as RAID or with rsync
/dev/sdb2 20gb, root partition, to be copied from /dev/sda2 as RAID or with rsync
/dev/sdb3 443.75gb, data partition, copied from /dev/sda3 as RAID or with rsync
/dev/sdb4 extended partition covering the next set of partitions
/dev/sdb5 20gb, root partition for later use and experimentation (will I be able to point grub to this partition on another day if I install another system to test?)
/dev/sdb6 20gb, root partition for later use and experimentation
/dev/sdb7 remaining gb on the disk for extra data storage that will be backed up separately

Is that a scheme that would work, and would it give me the flexibility of being able to install the next upgrade in a different root to test it, and things like that?

Thanks again,
George
-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/a836ff90f492078f494adcf0c6059fc6.jpg?s=120&d=mm&r=g)
On 2011/09/14 15:01 (GMT+0800) George OLson composed:
So here is a question about multiple roots - when setting up 2 or 3 20gb root partitions for future experimentation, do those partitions have to be primary partitions? Also, do the root partitions have to be next to the swap and next to the /home partition, or can they be anywhere on the drive?
Suppose I setup my TB drive so that it is like this:
/dev/sdb1 2gb, linux swap to be copied from /dev/sda1 as RAID or with rsync
I don't know if rsync can do swap partitions, or why anyone would want to. I don't see much point in using RAID for a swap partition either. Without RAID, you'd have 4G of swap instead of 2G.
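In line with Felix's point, the usual approach is not to copy swap at all, but to initialize the second partition as fresh swap. A minimal sketch, where the device path and label in the example call are hypothetical, and on a real system the call runs as root:

```shell
# Swap holds nothing worth mirroring; just make new swap space on
# the second drive and refer to it by label in /etc/fstab.
init_swap() {
  mkswap -L "$2" "$1"
}
# e.g., as root:  init_swap /dev/sdb1 swap2
# then in /etc/fstab:  LABEL=swap2  swap  swap  defaults 0 0
```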
/dev/sdb2 20gb, root partition be copied from /dev/sda2 as RAID or with rsync /dev/sdb3 443.75gb, data partition, copied from /dev/sda3 as RAID or with rsync /dev/sdb4 extended partition covering the next set of partitions /dev/sdb5 20gb, root partition for later use and experimentation (will I be able to point grub to this partition on another day if I install another system to test?)
Existing "Grubs" can be modified both to chainload later-installed OSes and to load their kernels and initrds directly. The relevant contents of menu.lst are nothing but commands to Grub that could be run from a Grub shell even if there were no menu.lst at all; that is one of the major advantages of Grub over Lilo.
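As a concrete sketch (grub-legacy syntax; the device mapping and the kernel/initrd paths are all assumed for illustration), a later install on /dev/sdb5 could be added to the existing menu.lst either way:

```
# Load the test install's kernel directly (hd1,4 = /dev/sdb5):
title Test install on sdb5 (direct)
    root (hd1,4)
    kernel /boot/vmlinuz root=/dev/sdb5
    initrd /boot/initrd

# Or hand control to that partition's own boot sector:
title Test install on sdb5 (chainload)
    rootnoverify (hd1,4)
    chainloader +1
```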
/dev/sdb6 20gb, root partition for later use and experimentation /dev/sdb7 remaining gb on the disk for extra data storage that will be backed up separately
Is that a scheme that would work, and would give me the flexibility of being able to install the next upgrade in a different root to test it, and things like that?
All but your swap proposition is sane. I habitually put the root partitions much closer to the start of the HD, where traditionally faster I/O is available, and there is never a question of a BIOS being too old to reach a Grub that is installed way up high on the physical device. I have a lot of old systems where such situations could otherwise pop up. Whether 1T & up disks have significant speed variation from start to end I have no idea. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/ -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/184f2936f5d39b27534f4dd7c4d15bfb.jpg?s=120&d=mm&r=g)
Felix Miata wrote:
On 2011/09/14 15:01 (GMT+0800) George OLson composed:
So here is a question about multiple roots - when setting up 2 or 3 20gb root partitions for future experimentation, do those partitions have to be primary partitions? Also, do the root partitions have to be next to the swap and next to the /home partition, or can they be anywhere on the drive?
Suppose I setup my TB drive so that it is like this:
/dev/sdb1 2gb, linux swap to be copied from /dev/sda1 as RAID or with rsync
I don't know if rsync can do swap partitions, or why anyone would want to. I don't see much point in using RAID for a swap partition either.
If you lose swap on a running system, the system stops running. All my systems have swap-space on RAID1. -- Per Jessen, Zürich (16.2°C) -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/a836ff90f492078f494adcf0c6059fc6.jpg?s=120&d=mm&r=g)
On 2011/09/14 12:15 (GMT+0200) Per Jessen composed:
Felix Miata wrote:
I don't see much point in using RAID for a swap partition either.
If you lose swap on a running system, the system stops running. All my systems have swap-space on RAID1.
Isn't that true only for low memory systems actually using swap? My 24/7 RAID1 system boots with swap enabled, but usually one of the first things I do after boot is swapoff -a. Dedicated swap on systems with ample RAM seems to me to be an anachronism. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/ -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/184f2936f5d39b27534f4dd7c4d15bfb.jpg?s=120&d=mm&r=g)
Felix Miata wrote:
On 2011/09/14 12:15 (GMT+0200) Per Jessen composed:
Felix Miata wrote:
I don't see much point in using RAID for a swap partition either.
If you lose swap on a running system, the system stops running. All my systems have swap-space on RAID1.
Isn't that true only for low memory systems actually using swap?
Yes, it's probably only true for systems that actually use swap, but that's not reserved for "low-memory" systems :-)
My 24/7 RAID1 system boots with swap enabled, but usually one of the first things I do after boot is swapoff -a. Dedicated swap on systems with ample RAM seems to me to be an anachronism.
Not at all. My server systems have between 4G and 16G of memory; all have and all use swap (even if just a tiny bit). My new office systems have 2Gb and use swap - the older boxes only had 1Gb, and also used swap. If for instance you've got lots of stuff in KDE that you never use (I can count quite a few processes that I have no idea what they are for), they end up being swapped out permanently, and the memory is available for the part of the system that I actually use. -- Per Jessen, Zürich (19.2°C) -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/a836ff90f492078f494adcf0c6059fc6.jpg?s=120&d=mm&r=g)
On 2011/09/14 16:45 (GMT+0200) Per Jessen composed:
Felix Miata wrote:
My 24/7 RAID1 system boots with swap enabled, but usually one of the first things I do after boot is swapoff -a. Dedicated swap on systems with ample RAM seems to me to be an anachronism.
Not at all. My server systems have between 4G and 16G of memory; all have and all use swap (even if just a tiny bit). My new office systems have 2Gb and use swap - the older boxes only had 1Gb, and also used swap. If for instance you've got lots of stuff in KDE that you never use (I can count quite a few processes that I have no idea what they are for), they end up being swapped out permanently, and the memory is available for the part of the system that I actually use.
The way I remember it, if no dedicated swap partition exists, kernel will swap out to /. Right now on my 2.6.31 system referred to above, which has 4G of RAM and no swap partition enabled, with 5 web browsers with 100+ tabs open among them, and several other X apps open scattered among 6 virtual desktops, and Apache running in background, 51% of RAM is consumed by cache. I really don't see the point of having dedicated swap partition(s) on a typical desktop or laptop system. -- "The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation) Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/ -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
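Whether a given box ever actually touches its swap is easy to check before deciding. A minimal sketch (Linux-only; it just reads /proc/meminfo):

```shell
# Report swap currently in use, in kB. A value at or near zero over
# a long uptime supports treating dedicated swap as unnecessary on
# that particular machine.
swap_used_kb() {
  awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t - f}' /proc/meminfo
}
swap_used_kb
```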
![](https://seccdn.libravatar.org/avatar/184f2936f5d39b27534f4dd7c4d15bfb.jpg?s=120&d=mm&r=g)
Felix Miata wrote:
On 2011/09/14 16:45 (GMT+0200) Per Jessen composed:
Felix Miata wrote:
My 24/7 RAID1 system boots with swap enabled, but usually one of the first things I do after boot is swapoff -a. Dedicated swap on systems with ample RAM seems to me to be an anachronism.
Not at all. My server systems have between 4G and 16G of memory; all have and all use swap (even if just a tiny bit). My new office systems have 2Gb and use swap - the older boxes only had 1Gb, and also used swap. If for instance you've got lots of stuff in KDE that you never use (I can count quite a few processes that I have no idea what they are for), they end up being swapped out permanently, and the memory is available for the part of the system that I actually use.
The way I remember it, if no dedicated swap partition exists, kernel will swap out to /.
No, if your system doesn't have any swap-space (file or partition), no swapping will happen.
Right now on my 2.6.31 system referred to above, which has 4G of RAM and no swap partition enabled, with 5 web browsers with 100+ tabs open among them, and several other X apps open scattered among 6 virtual desktops, and Apache running in background, 51% of RAM is consumed by cache. I really don't see the point of having dedicated swap partition(s) on a typical desktop or laptop system.
The topic was also more about swapping on RAID1, which a typical desktop or laptop probably doesn't have. Nonetheless, my workstation also has 4Gb RAM:

Mem:  4054784k total, 2433940k used, 1620844k free,      24k buffers
Swap: 3911736k total, 1465600k used, 2446136k free, 1042500k cached

Like I said, my new typical office systems with 2Gb use swap. Whether it is a dedicated partition or not is probably of little significance. /Per -- Per Jessen, Zürich (19.4°C) -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/8434092a3798a0467c3f2371ef030fc6.jpg?s=120&d=mm&r=g)
On 9/14/2011 10:04 AM, Felix Miata wrote:
On 2011/09/14 12:15 (GMT+0200) Per Jessen composed:
Felix Miata wrote:
I don't see much point in using RAID for a swap partition either.
If you lose swap on a running system, the system stops running. All my systems have swap-space on RAID1.
Isn't that true only for low memory systems actually using swap? My 24/7 RAID1 system boots with swap enabled, but usually one of the first things I do after boot is swapoff -a. Dedicated swap on systems with ample RAM seems to me to be an anachronism.
On my production servers I haven't even allowed the installer to create a swap partition or swap file since the first server with 4G of ram. It's been years and years and no problems. If oom-killer ever kills anything, it's never been anything we noticed. Of course they don't hibernate or suspend, so it's not needed for that either. Swap never gets turned on in the first place because there is no swap in /etc/fstab, so there is no need to turn it off.

It's been too many machines with too many users doing too much work for too many years to be a debatable topic for me any more. Maybe it's only because of our particular workload and/or other factors that are specific to us, but, even if not necessarily for anyone else, it is definitely a proven theory and an answered question by now, at least for us. Basically I just watched my swap usage in top, and determined that the only time any swap was getting used at all was for artificial things that don't count, like things that specifically tested for space by trying to use it.

Example: up 127 days, 104 users, Mem Used/Total 3.4G/3.6G, Swap Used/Total 3.1M/8.4G

I have swap enabled on that box because, given the number of users, the merely 4G of ram, and the somewhat spike-able nature of the workload, it seemed prudent - but clearly it's not actually needed. Heck, even on a box with only 2G ram: 47 users, up 44 days, 2G swap, 220K used. I don't even need it on that little 2G box.

I entirely agree about the anachronism, as long as your motherboard can support whatever amount of ram you would have used as swap anyway. That 2G box is a crappy little desktop board crammed into a cheap rackmount case. (I didn't buy it or make it or initially install it, but as long as it continues to do its job I'll continue to let it.) Being a cheap desktop board, it only has two ram slots and can only support 2G ram max. So in that case it seemed like some swap was a good idea to make up for physical motherboard limitations.
But that's a completely unusual situation any more and even there it turns out to never get used. -- bkw -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/8d38dde60257fffd229ae536674c6afe.jpg?s=120&d=mm&r=g)
When using MD-RAID 1 it should be possible to mount the individual partitions that comprise the raid array. Say you have /dev/md0 made from /dev/sda1 and /dev/sdb1; if you stop the RAID array, you should be able to mount /dev/sda1 and /dev/sdb1 individually. -- Best regards, A. Helge Joakimsen -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/184f2936f5d39b27534f4dd7c4d15bfb.jpg?s=120&d=mm&r=g)
Duaine Hechler wrote:
So in conclusion, RAID 1 is GREAT until you do a major screw up - then BOTH drives are not usable.
All of my individual systems (20+), local as well as external, use RAID1 and have done so for years. Today's harddisks, whether for desktop or server use, _will_ break. Data can be saved by backups, uptime can be saved by redundancy. RAID1 does both. -- Per Jessen, Zürich (16.1°C) -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/1dea64c351deaab589ae75daff9fe44e.jpg?s=120&d=mm&r=g)
On 09/14/2011 01:31 AM, Per Jessen wrote:
Duaine Hechler wrote:
So in conclusion, RAID 1 is GREAT until you do a major screw up - then BOTH drives are not usable.

All of my individual systems (20+), local as well as external, use RAID1 and have done so for years. Today's harddisks, whether for desktop or server use, _will_ break. Data can be saved by backups, uptime can be saved by redundancy. RAID1 does both.

Agreed, I just choose to use rsync.
-- Duaine Hechler Piano, Player Piano, Pump Organ Tuning, Servicing& Rebuilding Reed Organ Society Member Florissant, MO 63034 (314) 838-5587 dahechler@att.net www.hechlerpianoandorgan.com -- Home& Business user of Linux - 11 years -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/5ac12776af8e11a58b0b513293feb2d6.jpg?s=120&d=mm&r=g)
On 09/14/2011 12:06 PM, Duaine Hechler wrote:
When I first started in Linux and found out about RAID, I setup RAID 1.
THEN, unfortunately, the way I play with my system, I found RAID to be very detrimental because I accidentally messed up where I could not boot.
I don't want to be nosy, but may I ask what is an example of the way you were playing with your system which messed it up so you could not boot? Are you talking about a system upgrade, or something more basic? Because I am also playing with my system as I learn all these things, and I am wondering if I should avoid doing certain things. The general advice I have received is to not log in as root unless I have to, which I do try and follow. George
-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/1dea64c351deaab589ae75daff9fe44e.jpg?s=120&d=mm&r=g)
On 09/14/2011 01:34 AM, George OLson wrote:
On 09/14/2011 12:06 PM, Duaine Hechler wrote:
When I first started in Linux and found out about RAID, I setup RAID 1.
THEN, unfortunately, the way I play with my system, I found RAID to be very detrimental because I accidentally messed up where I could not boot.
I don't want to be nosy, but may I ask what is an example of the way you were playing with your system which messed it up so you could not boot? Are you talking about a system upgrade, or something more basic? Because I am also playing with my system as I learn all these things, and I am wondering if I should avoid doing certain things.
The general advice I have received is to not log in as root unless I have to, which I do try and follow.
George

Man, that's really hard to say. That was about 11 years ago. Most of it was from lack of experience, some from not being able to resolve and match the right versions of dependencies. Some was being on other distros, like Mandrake, Slackware, etc., which had "in progress" repos.
I must have played with 5 or 6 distros before settling on SuSE / openSuSE. At least from an opensuse standpoint, with the level of intelligence of the dependency checking, it's pretty hard to "screw up". Duaine -- Duaine Hechler Piano, Player Piano, Pump Organ Tuning, Servicing & Rebuilding Reed Organ Society Member Florissant, MO 63034 (314) 838-5587 dahechler@att.net www.hechlerpianoandorgan.com -- Home & Business user of Linux - 11 years -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/5ac12776af8e11a58b0b513293feb2d6.jpg?s=120&d=mm&r=g)
On 09/14/2011 12:06 PM, Duaine Hechler wrote:
When I first started in Linux and found out about RAID, I setup RAID 1.
THEN, unfortunately, the way I play with my system, I found RAID to be very detrimental because I accidentally messed up where I could not boot.
THAT meant my partner drive was also messed up.
So I took off RAID and every night I manually run "rsync" to keep the two drives in sync.
Another question - can you give me an example of what your rsync lines look like? If I were to go the same route as you (which I haven't fully decided yet), what options would I use to rsync my root directory over to the backup drive? I am thinking that

rsync -avr --delete / backupdrive (meaning the proper thing here)

is probably just too simple and will probably miss some things, but I do not fully understand why. I have read through the manual, and continue to study it to understand all the options, but some examples would help.

Thanks
George
-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/247f3737bfdd07c80a5411399e9a504c.jpg?s=120&d=mm&r=g)
George OLson wrote:
Another question - can you give me an example of what your rsync lines look like? If I were to go the same route as you (which I haven't fully decided yet), what options would I use to rsync my root directory over to the backup drive? I am thinking that rsync -avr --delete / backupdrive(meaning the proper thing here)
I use

/usr/bin/rsync -aHx --delete --stats --numeric-ids --exclude=tmp --exclude=Cache --exclude=.gvfs --delete-excluded / /backup/hostname/root

It seems to work but I've no idea if it is optimal. Cheers, Dave -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/1dea64c351deaab589ae75daff9fe44e.jpg?s=120&d=mm&r=g)
On 09/14/2011 07:46 AM, George OLson wrote:
On 09/14/2011 12:06 PM, Duaine Hechler wrote:
When I first started in Linux and found out about RAID, I setup RAID 1.
THEN, unfortunately, the way I play with my system, I found RAID to be very detrimental because I accidentally messed up where I could not boot.
THAT meant my partner drive was also messed up.
So I took off RAID and every night I manually run "rsync" to keep the two drives in sync.
Another question - can you give me an example of what your rsync lines look like? If I were to go the same route as you (which I haven't fully decided yet), what options would I use to rsync my root directory over to the backup drive? I am thinking that rsync -avr --delete / backupdrive(meaning the proper thing here)
is probably just too simple and will probably miss some things, but I do not fully understand why. I have read through the manual, and continue to study it to understand all the options, but some examples would help.
Thanks
George

I come from the old-school mainframe days, so the attached scripts use REXX (regina).
Between the single quotes are the rsync commands Duaine -- Duaine Hechler Piano, Player Piano, Pump Organ Tuning, Servicing& Rebuilding Reed Organ Society Member Florissant, MO 63034 (314) 838-5587 dahechler@att.net www.hechlerpianoandorgan.com -- Home& Business user of Linux - 11 years
![](https://seccdn.libravatar.org/avatar/d0edefa23f9401a724b4d56ec040432f.jpg?s=120&d=mm&r=g)
2011. szeptember 14. 5:31 napon "David C. Rankin" <drankinatty@suddenlinkmail.com> írta:
On 09/13/2011 08:13 PM, George OLson wrote:
snip
You are correct - BIOS RAID is not an option if you want to save your current install. (unless you use dd to block copy your OS off to another spare drive, then install 2 [same size] drives, set up the bios raid, boot from the install cd and use dd to reinstall your OS onto the new mirror). Yes, you can mirror all partitions (/boot, /, /home & SWAP).
David, I guess this is not correct. When you enter the fakeraid BIOS you can make an array and choose 'copy' option. This will make the array and copy the content of disk1 to disk2. Or you can make the array without copying, then boot a live linux (eg knoppix) and use dd to copy disk1 to disk2. It is not necessary to copy disk1 to disk3, make array with disks 1 and 2, and copy back the data from disk3 to the array. However I agree that it is the safest solution since you will have a complete backup. Cheers, Istvan -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/8434092a3798a0467c3f2371ef030fc6.jpg?s=120&d=mm&r=g)
On 9/14/2011 6:13 PM, Istvan Gabor wrote:
2011. szeptember 14. 5:31 napon "David C. Rankin"<drankinatty@suddenlinkmail.com> írta:
On 09/13/2011 08:13 PM, George OLson wrote:
snip
You are correct - BIOS RAID is not an option if you want to save your current install. (unless you use dd to block copy your OS off to another spare drive, then install 2 [same size] drives, set up the bios raid, boot from the install cd and use dd to reinstall your OS onto the new mirror). Yes, you can mirror all partitions (/boot, /, /home& SWAP).
David, I guess this is not correct. When you enter the fakeraid BIOS you can make an array and choose 'copy' option. This will make the array and copy the content of disk1 to disk2.
This is only true for some bios's and some cards. It's not a given.
Or you can make the array without copying, then boot a live linux (eg knoppix) and use dd to copy disk1 to disk2.
It is not necessary to copy disk1 to disk3, make array with disks 1 and 2, and copy back the data from disk3 to the array. However I agree that it is the safest solution since you will have a complete backup.
This is almost always NOT true at all. When you tell the bios to assign the drive to an array, the bios uses some of the drive's space to write raid metadata, and that space becomes invisible to linux, whereas before you assign the drive to any array, linux has access to the entire drive. If you start with a non-raid drive, dd it to another, and THEN tell the bios to use them both in a raid1 array, on most fakeraid bios's this will result in overwriting part of your filesystem with bios raid formatting data, which means corrupting, possibly entirely destroying, the filesystem.

There are advantages to hardware raid. There are advantages to software raid. Fakeraid is "the worst of both worlds". You get all the disadvantages of software raid and all the disadvantages of hardware raid, and one teeny tiny little unnecessary advantage of hardware raid, which is just that it's possible to boot from the array, so you don't need a non-raid drive to boot from.

I've only booted from usb thumb drives for years, so I happily make all drives fully software raid-whatever-raid-level-I-want. Even on a laptop, where you can't leave a thumb drive plugged in all the time but might have more than one hard drive (which is still rare if no longer unheard of), you still don't need fakeraid. You just partition a small boot partition on each drive the same way, make those a raid1 array in fully regular software raid, and the bios can boot from either copy just fine without knowing that it's part of a raid1.

Even some versions of Windows have software raid, so even if you wanted to dual boot with Windows and you wanted Windows to have raid too, as long as it's any of the versions with dynamic disks you still don't need fakeraid. You'd need a laptop with more than one drive and Windows 2K/XP/Vista/7 Home to actually need fakeraid. And if you were dual booting Linux on that box you'd need dmraid in Linux.
-- bkw -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/b4047644c59f2d63b88e9464c02743fd.jpg?s=120&d=mm&r=g)
On 9/14/2011 4:33 PM, Brian K. White wrote:
. You just partition a small boot partition on each drive the same way, make those a raid1 array in fully regular software raid, and the bios can boot from either copy just fine without knowing that it's part of a raid1.
I've used this method for many years with Grub, but I have to tell you, you better write a script that lives in /boot which copies everything to the other, because without that the /boot partitions will drift out of sync. -- _____________________________________ ---This space for rent--- -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
![](https://seccdn.libravatar.org/avatar/8434092a3798a0467c3f2371ef030fc6.jpg?s=120&d=mm&r=g)
On 9/14/2011 8:16 PM, John Andersen wrote:
On 9/14/2011 4:33 PM, Brian K. White wrote:
You just partition a small boot partition on each drive the same way, make those a raid1 array in fully regular software raid, and the bios can boot from either copy just fine without knowing that it's part of a raid1.
I've used this method for many years with Grub, but I have to tell you, you better write a script that lives in /boot which copies everything to the other, because without that the /boot partitions will drift out of sync.
No, I just make /boot a raid1. If either copy ever gets modified outside of linux, mdadmd alerts me by email the same as for any other array, and it's easy to reset the array by removing and re-adding the "failed" partition, since it's tiny. But that never happens anyway: kernel updates write to "/boot", not to /dev/anything, so the contents are always identical automatically.

The only other special attention is 2 things:

* I configure /etc/grub.conf to write grub to all drives' mbr's, or to all drives' /boot partitions. This way any drive can boot. Normally only one ever does boot, but the point is all about that day that drive is bad.

* I configure at least one duplicate stanza in menu.lst where the only difference is the grub boot drive. No need to write one for every possible drive; if the normal hd0 is bad, booting from hd1 is all you need. Or really, I don't even bother with that, because when that day comes, it's easy enough to do manually from the grub prompt.

But really, I don't even do that any more. Too much work. Just stick a thumb drive somewhere and treat it like a regular simple non-raid drive. Both reads and writes are completely rare, so wear is no problem. It can be tricky getting a given motherboard to recognize a thumb drive for booting. There are often different quirks and hoops to jump through for each motherboard, but these days most boards can do it, and the special mystery requirements about formatting, max size and bios settings are getting fewer every few months. It used to be pretty exotic and not worth the grief, but it's getting almost painless now.

One of the biggest reasons I do that is so I don't have to partition the real drives at all. I just use the whole raw drive device in mdadm, no fdisk or anything. That means that when I have 24 drives in one box, I don't have to do 24 identical fdisk setups, exactly the same, correctly. That's excruciating in the yast interface.
It's a lot better at the command line, where you can re-run an identical sfdisk 24 times, not only safely and identically each time, but quickly too. Although it's hardly user friendly!

Nor do I have to do anything when a drive goes bad. I just yank it, pop in the new one, and it's one mdadm command to add the new drive to a single array. Without the usb thumb drive for /boot, I'd have to clone the partitioning scheme of one of the other drives, then tell mdadm to add each of 2 or 3 partitions to 2 or 3 different md arrays. More steps mean not only more work but more down time, or at least more at-risk time during problems, and more mistakes.

If it seems complicated or delicate, it is a little, but mostly only because things like yast don't consider this kind of arrangement yet. There's no technical reason they couldn't, and then this would all be effortless for grandma, let alone me.

-- bkw
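The difference in replacement effort can be sketched like this. Again dry-run (`run()` only prints); /dev/sdx is a hypothetical new drive and /dev/sda a surviving one.

```shell
run() { echo "+ $*"; }   # dry-run: print, don't execute

# Whole-device members (with /boot on a thumb drive): one step.
run mdadm /dev/md0 --add /dev/sdx

# Partitioned members: clone the partition table from a surviving
# drive first, then re-add each partition to its own array.
run "sfdisk -d /dev/sda | sfdisk /dev/sdx"
run mdadm /dev/md1 --add /dev/sdx1
run mdadm /dev/md2 --add /dev/sdx2
```

`sfdisk -d` dumps a partition table in a form sfdisk can replay, which is what makes the 24-drive case tolerable at the command line.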
![](https://seccdn.libravatar.org/avatar/27aacf61a13c66fcc083fcf8a84823bc.jpg?s=120&d=mm&r=g)
On 09/14/2011 09:30 PM, Brian K. White wrote:
I've used this method for many years with Grub, but I have to tell you, you better write a script that lives in /boot which copies everything to the other, because without that the /boot partitions will drift out of sync.
No I just make /boot a raid1.
Agreed. I have used /, /home, /boot, and swap as raid1 with both dmraid and mdraid. All partitions are mirrored. I can then shut down, pull the power cord to one of the drives, reboot, and have the system boot just fine in single-disk mode. I have heard arguments against mirroring /boot and swap, but I've never had a problem over the past decade. Further, I've always wondered: if /boot weren't mirrored, how in the heck would you boot without an install disk in the case of a drive failure? (Especially when the server is a remote server...)

-- David C. Rankin, J.D., P.E.
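The swap half of that deserves a note: with swap on an md mirror rather than on two raw partitions, a dying disk can't take the running kernel down with it mid-swap. A dry-run sketch (`run()` only prints; sda2/sdb2 are hypothetical partitions):

```shell
run() { echo "+ $*"; }   # dry-run: print, don't execute

# Swap on a raid1 of two partitions, instead of two separate swaps
run mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
run mkswap /dev/md3
run swapon /dev/md3
```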
![](https://seccdn.libravatar.org/avatar/27aacf61a13c66fcc083fcf8a84823bc.jpg?s=120&d=mm&r=g)
On 09/14/2011 05:13 PM, Istvan Gabor wrote:
David, I guess this is not correct. When you enter the fakeraid BIOS you can make an array and choose 'copy' option. This will make the array and copy the content of disk1 to disk2.
Or you can make the array without copying, then boot a live linux (eg knoppix) and use dd to copy disk1 to disk2.
It is not necessary to copy disk1 to disk3, make the array with disks 1 and 2, and then copy the data back from disk3 to the array. However, I agree that it is the safest solution, since you end up with a complete backup.
Cheers,
Istvan
Hah! I hope you are right. I haven't seen the copy option with any of the bios raid setups I have. The issue is getting the bios to recognize the filesystems on the existing disks while creating the dmraid arrays, e.g.:

[06:54 archangel:/home/david] # dmraid -r
/dev/sdd: nvidia, "nvidia_baaccaja", mirror, ok, 1465149166 sectors, data@ 0
/dev/sdc: nvidia, "nvidia_fdaacfde", mirror, ok, 976773166 sectors, data@ 0
/dev/sda: nvidia, "nvidia_fdaacfde", mirror, ok, 976773166 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_baaccaja", mirror, ok, 1465149166 sectors, data@ 0

It's my understanding that the bios must create the array before dmraid can assign the dm descriptor (e.g. "nvidia_fdaacfde") to it. With all the bios setups I have, the process of designating 2 disks as an array in the bios removes all current filesystem boundaries, basically wiping the disks clean. Then, after the array is created, you can re-partition the new array or use dd to restore a working install.

Things may have gotten better by now. All my bios raid setups are at least 2 years old, so there may be new functionality. I sure hope there is, because that would make setup a breeze...

-- David C. Rankin, J.D., P.E.
![](https://seccdn.libravatar.org/avatar/d0edefa23f9401a724b4d56ec040432f.jpg?s=120&d=mm&r=g)
On September 14, 2011, at 3:13, George OLson <grglsn765@gmail.com> wrote:
I am about to set up RAID-1 on my system, and I am looking for some experienced opinions.
snip
I have recently purchased a 1TB drive to use as the mirror image, with the intent that the 2nd 500GB on this drive will be used for other purposes, like maybe testing a new installation when it comes out, or just having extra data that I don't need on the mirror image partition. I have installed the 1TB drive and hooked it up. It is not yet partitioned.
I've been using softraid/fakeraid for a while and I am satisfied with it. It saved my data at least two times when one of the disks went bad. But I think for fakeraid you need two drives with the same capacity, or maybe even the same type, so I think you can't use fakeraid with your 1TB and 500GB disks.
I have done some research on the internet, and I found some tech sites that said that using the RAID setup in the BIOS is easy to set up, and you can do it without having to reinstall your OS. However, other sites have said that you do have to reinstall your OS, as it will wipe the original drive. (In any case, it would have been easier if I had set up RAID before the initial install, but too late now.)
This is not correct, at least in the case of some fakeraid cards/chips, e.g. SiI 3114, SiI 3512, and maybe nvidia nvraid.
My BIOS setup only has 1 line indicating RAID, in the "IDE setup" menu, where it allows you to configure nVidia RAID as enabled or disabled. The motherboard user guide (it is an ASUS M2N68-AM SE2) doesn't give any other information.
This only enables the fakeraid BIOS showing up at boot. After you turn on the computer, a fakeraid BIOS message should appear naming the fakeraid type, version number etc., and telling you how to enter the fakeraid BIOS (e.g. press F10). Once you enter the fakeraid BIOS, you are able to make the RAID array. Until this is done, turning on fakeraid in the computer's BIOS will not affect the disks.
If I put the RAID setting to enabled, then on my next subsequent reboot, will the BIOS run me through a setup utility to set up RAID? If I knew for sure that it wouldn't wipe my original hard drive, I would tend to go that way, as it seems simpler. This website,
Yes, see above; the fakeraid BIOS message will show up and tell you how to enter it.
"http://lifehacker.com/352472/set-up-real+time-bulletproof-backup-drive-redun...", indicated a very simple setup, but that is only 1 guy so I am skeptical if it is really as simple as he makes it out to be.
snip

Google for nvraid and manual to find info on how to set up nvidia raid. One link I found: ftp://ftp.tyan.com/manuals/m_NVRAID_Users_Guide_v20.pdf The version of nvraid does matter; newer versions have more setting options. All in all, I would buy another 1TB drive (same type) and make the array on that disk pair.

Istvan
participants (11)
- Brian K. White
- Dave Howorth
- David C. Rankin
- Duaine Hechler
- Felix Miata
- George OLson
- Istvan Gabor
- James Knott
- Joaquin Sosa
- John Andersen
- Per Jessen