[opensuse] Hard Disk Upgrades
I have a 160GB 3.5" disk for my swap and root partitions and it has a few bad sectors. I would like to change this disk to a 2.5" 160GB disk.

I also have a 500GB 3.5" disk for my home partition that I would like to replace with 2x1TB 2.5" in RAID1 (mirror).

Does anyone have any advice on how I should go about upgrading my disks? I have not had to do this in Linux before and would like to get it right first time.

All I know is that cloning the disk in the case of my root drive will not work due to the disk ID, but I cannot find a working guide on how to do this correctly.

I have no clue what to do for the RAID. My MoBo supports RAID; should I use this functionality or set up a software RAID with openSUSE?

What are the steps to replace a disk in a mirror if one were to fail? Is it a simple process like on my FreeNAS?

I am using 13.1

-- Paul Groves

-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On Wed, Sep 10, 2014 at 5:56 PM, Paul Groves
I have no clue what to do for the RAID. My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
That's the first thing you need to decide. Is your MoBo raid real hardware raid or fake raid?

http://serverfault.com/questions/9244/how-do-i-differentiate-fake-raid-from-...

If fake raid, I would ONLY use it if I needed to be Windows compatible, i.e. you need to set up a dual-boot Windows/openSUSE system. Otherwise, if fake raid, I would use pure software raid instead. If true hardware raid, then I would leverage that.

I think the other answers will depend on this one.

Greg
-- Greg Freemyer
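One way to approach that question from a running Linux system is a few read-only checks. This is only a sketch: the device name is an example, and it assumes the dmraid and mdadm packages are installed.

```shell
# Is the controller presenting one logical disk, or the raw members?
lspci | grep -i raid        # how the controller identifies itself
lsblk -o NAME,SIZE,MODEL    # one virtual disk = likely real HW raid;
                            # both members visible = likely fake raid
dmraid -r                   # reports any vendor fake-raid metadata found
mdadm --examine /dev/sda    # reports any Linux md superblock (read-only)
```

If lsblk shows each member disk individually while the BIOS claims a RAID set exists, that is the classic fake-raid signature.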
On 2014-09-11 00:05, Greg Freemyer wrote:
On Wed, Sep 10, 2014 at 5:56 PM, Paul Groves
wrote: I have no clue what to do for the RAID. My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
That's the first thing you need to decide.
And another thing: in order to even consider setting up a raid, you must have the hardware for a full backup of the raid. If you don't, better to use the extra disk not destined for the raid as backup instead.

Meaning: if you are going to set up a mirror of 2 disks, buy 3. Two for the mirror, another for the backup. If you can only afford 2 disks, then use 1 disk normally, no raid, and the other for off-line backup.

Why? Because raid is no substitute for backup, and only covers one type of failure. People who set up a raid supposedly value their data a lot, but raid alone does not protect your data, and gives a false sense of security...

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Greg Freemyer wrote:
On Wed, Sep 10, 2014 at 5:56 PM, Paul Groves
wrote: I have no clue what to do for the RAID. My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
That's the first thing you need to decide.
Is your MoBo raid real hardware raid or fake raid?
http://serverfault.com/questions/9244/how-do-i-differentiate-fake-raid-from-...
If fake raid, I would ONLY use it if I needed to be windows compatible. ie. You need to setup a dual boot windows/opensuse setup.
---- I disagree, *depending* on what type of RAID you want.

If you want RAID0 or RAID1, my experience with BIOS raid for those 2 types is that it is more versatile, reliable and software transparent. Since you want RAID1, assuming you are talking SATA or SAS, both disks can be written to at the same time by the controller and it will look like a single disk to windows and linux (and any other OS).

I've seen and had linux SW RAID5 (as well as HW), and with the same power and software interruptions, the SW RAID was more vulnerable to hard corruptions. W/SW RAID, you can't have a battery-backed-up ram that will write to the disk when the OS comes up -- because the memory will be purged.

As for the new disks...

1. Do them 1 at a time. Since you are going for different hard disks you would want to use a full disk backup and restore so the new disks will have a minimal amount of fragmentation.

*IF* you are using xfs... you can use xfsdump+xfsrestore to dupe a hard disk (won't make it bootable, but for data it's fine). (Don't use the xfs_copy routine: that is basically like a "dd" except it doesn't dup the diskid -- i.e. no defrag or "relayout".)

A trivial script that should hit ~75% of disk throughput is one I use w/xfs (I called it xfscopy). If you don't use xfs you might be able to adapt the concepts to your fs's similar dump/restore util. You need to be root or have unimpeded sudo access. It does some runtime io & cpu prioritizing to optimize things. Just added a few runtime checks -- args, privs -- but the rest I've used for ages...

#xfscopy---
#!/bin/bash -u
# trivial fulldisk copy using xfs_{dump,restore} & mbuffer - lwalsh
# sets cpu and io priorities to optimize copy speed
# $1=source
# $2=target

# ensure enough args
if (($#!=2)) ; then
    echo "xfscopy needs source and target mount points"
    exit 1
fi

PATH="/usr/sbin:/sbin:/usr/bin:/bin:$PATH"   #ensure util paths are first

# ensure privs (root or sudo)
export sudo="$(type -P sudo)"
function sudo {
    if (($(id -u))); then
        [[ ! $sudo ]] && return 1
        $sudo -n -- "$@"
    else
        exec "$@"
    fi
}
export -f sudo
read uid < <(sudo id -u)
if [[ $uid != 0 ]]; then echo "Must have admin privs"; exit 1; fi

# xfsdump opts:
#   -b = blocksize
#   -l = level (0=all)
#   -J = inhibit inventory update
#   -p = progress report every # seconds
# next-to-last arg is '-' for stdout/stdin
# last arg is the source or destination mount point
mbuffer_size=1024M
xfs_bs=128k
xfs_report_interval=300

# setting restore proc's cpu+disk io "higher" than dump's helps
# prevent filling memory and thrashing
# io prios: c1=realtime (don't use), c2=best-effort (timeshare), c3=idle
# in best-effort, -n=0-7 where 0=highest, 7=lowest, but not strict!
dump_cprio=-19
restore_cprio=-5
dump_dprio="-c3"
restore_dprio="-c 2 -n3"

# construct command for echo & running
cmd="
nice $dump_cprio ionice $dump_dprio \
    xfsdump -b $xfs_bs -l 0 -p $xfs_report_interval -J - $1 |
nice -1 mbuffer -m $mbuffer_size -L |
nice $restore_cprio ionice $restore_dprio \
    xfsrestore -b $xfs_bs -B -F -J - $2"
echo $cmd
sudo bash --norc -c "$cmd"
#end xfscopy
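Assuming the script above is saved as xfscopy and both xfs filesystems are already mounted, an invocation would look roughly like this (the mount points are examples, not from the original post):

```shell
chmod +x xfscopy
# copy the xfs filesystem mounted at /home onto the new one at /mnt/newhome
./xfscopy /home /mnt/newhome
```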
On September 11, 2014 3:53:01 AM EDT, Linda Walsh
Greg Freemyer wrote:
On Wed, Sep 10, 2014 at 5:56 PM, Paul Groves
wrote: I have no clue what to do for the RAID. My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
That's the first thing you need to decide.
Is your MoBo raid real hardware raid or fake raid?
http://serverfault.com/questions/9244/how-do-i-differentiate-fake-raid-from-...
If fake raid, I would ONLY use it if I needed to be windows compatible. ie. You need to setup a dual boot windows/opensuse
setup.
---- I disagree, *depending* on what type of RAID you want. If you want RAID0 or RAID1, my experience with BIOS raid for those 2 types is that it is more versatile, reliable and software transparent. Since you want RAID1, assuming you are talking SATA or SAS, both disks can be written to at the same time by the controller and it will look like a single disk to windows and linux (and any other OS).
You and I have had different experiences. The most obvious is your statement about Linux seeing a fakeraid as a single disk. With the fakeraid I've attempted to use, Linux sees each disk and has to be made to recognize the fakeraid exists and read the raid config out of the BIOS, then take the responsibility of managing the disks just as it does with software raid.

Fyi: fake raid works by implementing custom firmware in the BIOS I/O interface used during boot. Thus Windows and Linux only see a single drive during the boot process, but for Linux, as soon as the normal ATA/SCSI drivers kick in the BIOS is bypassed and the kernel has to manage the raid array itself. dmraid (as opposed to MD raid or mdraid) is the Linux package that is responsible for reading the config out of the BIOS and setting up the Linux kernel to manage the disks correctly.

Due to that, dmraid has to have knowledge of each type of fakeraid controller and is more likely to have bugs/missing support. md raid (pure software raid) on the other hand only has to interpret its own data structures, so there is more flexibility.

You seem to be talking about hardware raid, which uses neither dmraid nor mdraid. I agree hardware raid in general is the best, but it normally comes with a decent price tag. The cheapest 2-disk hardware raid 1 controller I've bought was over $100 for just the card.
I've seen and had linux SW RAID5 (as well as HW), and with the same power and software interruptions, the SWRAID was more vulnerable to hard corruptions. W/SW RAID, you can't have a battery-backed up ram that will write to the disk when the OS comes up -- because the memory will be purged.
To my knowledge, battery-backed-up ram is a unique feature of true hardware raid. Your mentioning it continues to make me wonder if we are talking about the same thing.

Have you ever used dmraid? (Again, that is different than md, mdadm, etc.)

Greg
Greg Freemyer wrote:
You and I have had different experiences. The most obvious is your statement about Linux seeing a fakeraid as a single disk. With the fakeraid I've attempted to use, Linux sees each disk and has to be made to recognize the fakeraid exists and read the raid config out of the bios, then take the responsibility of managing the disks just as it does with software raid.
Fyi: fake raid works by implementing custom firmware in the bios I/o interface used during boot. Thus Windows and Linux only see a single drive during the boot process, but for Linux as soon as the normal ATA/scsi drivers kick in the bios is bypassed and the kernel has to manage the raid array itself. dmraid (as opposed to MD raid or mdraid) is the Linux package that is responsible for reading the config out of the bios and setting up the Linux kernel to manage the disks correctly. === Are you saying that the BIOS could not provide an virtual 'HW RAID' interface where the BIOS handled/managed the HD's and doesn't use
Well, this is likely a terminology issue. In my case, the "fake raid" in the Dell BIOS shows only 1 disk to linux -- something like 'Dell Virtual Disk [00-xx]' that acts like a new physical unit (drive C, D, etc.).

It is a solution you get w/o paying extra for a RAID card, but that has *no hardware acceleration*... the BIOS presents a RAID0 or RAID1 set as 1 disk for normal interactions. Note: if I go into /sys, I can find entries corresponding to each disk under the software controller device -- but I can see those under a HW-raid device as well.

It is likely that all 'fake raid' solutions are not created equal. The software-only BIOS-based raids that come with some BIOS's may not, in some sense, be 'fake', but may be a software RAID solution that exists in the BIOS and can only manage disks in the system (no way to add on external disk(s) unless your box has multiple eSATA plugs).

The thing I liked is that it's all handled pre-boot, so @boot OS's think there is one HD there; but unlike HW RAID solutions, there is no HW to do RAID[5,6,50,60..etc], so it is limited to whatever commands the HBA can do in parallel on the disks it manages (usually internal only).

OTOH! -- I've seen HW-accelerated RAIDs where the OS SAW each local HD, but only accessed the 'group', where it got the benefit of HW-based disk-sum calculations.

That's why I answered the way I did. If the OS's don't need a separate dev driver and think they are talking to some generic sata or sas drive (although, perhaps, a bit large) -- even if there is no HW card, I'm not sure I'd call that "fake raid"... though many may. I.e. the lines, surprisingly, may be a bit blurry?
Due to that, dmraid has to have knowledge of each type of fakeraid controller and is more likely to have bugs/missing support.
--- There is no dmraid module needed on _these_ SW-BIOS based raid solutions. Used to be I didn't even include any of the kernel modules for any of the RAID varieties. That's what I mean by OS independence.
md raid (pure software raid) on the other hand only has to interpret its own data structures, so there is more flexibility.
--- Before I started including more modules as "options", I didn't include any linux raid related modules.
You seem to be talking about hardware raid which uses neither dmraid nor mdraid. I agree hardware raid in general is the best, but it normally comes with a decent price tag. The cheapest 2-disk hardware raid 1 controller I've bought was over $100 for just the card.
--- Well, I agree it shares elements w/a HW RAID card, but it has no HW (thus limited to RAID0 or RAID1; I'm not sure about RAID10).
I've seen and had linux SW RAID5 (as well as HW), and with the same power and software interruptions, the SWRAID was more vulnerable to hard corruptions. W/SW RAID, you can't have a battery-backed up ram that will write to the disk when the OS comes up -- because the memory will be purged.
To my knowledge, battery-backed up ram is a unique feature of true hardware raid. Your mentioning it continues to make me wonder if we are talking about the same thing.
--- There is no need for a battery-backed-up cache, since what is written on disk can't be disintegrous -- incomplete, maybe, but if some part of a file is written and the crash happened before the 2nd DMA did its copy, then on recovery the missing data would be ignored, the copy that did get written would be used, and the other copy would be rewritten after a read.
Have you ever used dmraid? (again, that is different than md, mdadm, etc.)
--- Not that I can remember. I believe it was mdraid... dmraid would be like using lvm (not sure how different) to create a striped or mirrored volume, no?
On Thu, Sep 11, 2014 at 3:47 PM, Linda Walsh
Greg Freemyer wrote:
You and I have had different experiences. The most obvious is your statement about Linux seeing a fakeraid as a single disk. With the fakeraid I've attempted to use, Linux sees each disk and has to be made to recognize the fakeraid exists and read the raid config out of the bios, then take the responsibility of managing the disks just as it does with software raid.
---- Well, this is likely a terminology issue.
In my case, the "fake raid" in the Dell BIOS shows only 1 disk to linux -- something like 'Dell Virtual Disk [00-xx]' that act like new physical units (drive C, D, etc..)
It is a solution you get w/o paying extra for a RAID card but that has *no hardware acceleration*... but the BIOS presents a RAID0 or RAID1 set as 1 disk for normal interactions.
Note: if I go into /sys, I can find entries corresponding to each disk under the software controller device -- but I can see those under a HW-raid device as well.
It is likely that all 'fake raid' solutions are not created equal. The software-only BIOS-based raids that come with some BIOS's, may not, in some sense, be 'fake', but may be a software RAID solution that exists in the BIOS and can only manage disks in the system (no way to add on external disk(s) unless your box has multiple eSATA plugs).
The things I liked -- is that it's all handled pre-boot so @boot OS's think there is one HD there, but unlike HW RAID solutions, there is no HW to do RAID[5,6,50,60..etc], so it is limited to whatever commands the HBA can do in parallel on the disks it manages (usually internally only).
OTOH! -- I've seen HW-accelerated RAIDS where the OS SAW each local HD, but only accessed the 'group' where it got benefit of HW-based disk-sum calculations.
That's why I answered the way I did. If the OS's don't need a separate dev driver and think they are talking to some generic sata or sas drive (although, perhaps, a bit large) -- even if there is no HW card, I'm not sure I'd call that "fake raid"... though many may.
I.e. the lines, surprisingly, may be a bit blurry?
I'm not familiar with the on-motherboard raid setup you're describing. It seems very close to hardware raid. Also, I've paid good money for a 3ware raid controller that could only do RAID 0 and 1. It was still hardware raid.
Fyi: fake raid works by implementing custom firmware in the bios I/o interface used during boot. Thus Windows and Linux only see a single drive during the boot process, but for Linux as soon as the normal ATA/scsi drivers kick in the bios is bypassed and the kernel has to manage the raid array itself. dmraid (as opposed to MD raid or mdraid) is the Linux package that is responsible for reading the config out of the bios and setting up the Linux kernel to manage the disks correctly.
=== Are you saying that the BIOS could not provide an virtual 'HW RAID' interface where the BIOS handled/managed the HD's and doesn't use the.
As I understand it, the fake raid I'm familiar with only provides a pseudo raid interface via the BIOS INT13 (http://en.wikipedia.org/wiki/INT_13H). INT13 has historically been used during the initial boot process to access disks. It is a low-performance interface, but it is consistent and allows for simplicity in initial boot code.

As I understand it, once the true Linux kernel takes over I/O to the drives, it no longer uses the INT13 interface, which means for fake raid there is no longer any raid functionality provided. Thus in reality, once the boot process is done, fake raid becomes true software raid, but without all the flexibility md and mdadm offer.
Due to that, dmraid has to have knowledge of each type of fakeraid controller and is more likely to have bugs/missing support.
--- There is no dmraid module needed on _these_ SW-BIOS based raid solutions. Used to be I didn't even include any of the kernel modules for any of the RAID varieties. That's what I mean by OS independence.
If you need neither dmraid or md / mdadm, then what you have is either real hardware raid or a very much improved fake raid.
md raid (pure software raid) on the other hand only has to interpret its own data structures, so there is more flexibility.
--- Before I started including more modules as "options", I didn't include any linux raid related modules.
You seem to be talking about hardware raid which uses neither dmraid nor mdraid. I agree hardware raid in general is the best, but it normally comes with a decent price tag. The cheapest 2-disk hardware raid 1 controller I've bought was over $100 for just the card.
--- Well, I agree it shares elements w/a HW RAID card, but it has no HW (thus limited to RAID0 or RAID1; I'm not sure about RAID10).
I've seen and had linux SW RAID5 (as well as HW), and with the same power and software interruptions, the SWRAID was more vulnerable to hard corruptions. W/SW RAID, you can't have a battery-backed up ram that will write to the disk when the OS comes up -- because the memory will be purged.
To my knowledge, battery-backed up ram is a unique feature of true hardware raid. Your mentioning it continues to make me wonder if we are talking about the same thing.
--- There is no need for a battery-backed-up cache, since what is written on disk can't be disintegrous -- incomplete, maybe, but if some part of a file is written and the crash happened before the 2nd DMA did its copy, then on recovery the missing data would be ignored, the copy that did get written would be used, and the other copy would be rewritten after a read.
Have you ever used dmraid? (again, that is different than md, mdadm, etc.)
--- Not that I can remember. I believe it was mdraid... dmraid would be like using lvm (not sure how different) to create a striped or mirrored volume, no?
I believe dmraid is primarily a discovery tool. It interrogates the system for fakeraid controllers, and it can also look at metadata blocks on the drives themselves to look for fakeraid signatures. If it determines fakeraid is in use, it does its best to determine the appropriate configuration and tells the kernel how to properly interact with the drives. Here's the list of things dmraid claims to support:

$ /sbin/dmraid -l
asr     : Adaptec HostRAID ASR (0,1,10)
ddf1    : SNIA DDF1 (0,1,4,5,linear)
hpt37x  : Highpoint HPT37X (S,0,1,10,01)
hpt45x  : Highpoint HPT45X (S,0,1,10)
isw     : Intel Software RAID (0,1,5,01)
jmicron : JMicron ATARAID (S,0,1)
lsi     : LSI Logic MegaRAID (0,1,10)
nvidia  : NVidia RAID (S,0,1,10,5)
pdc     : Promise FastTrack (S,0,1,10)
sil     : Silicon Image(tm) Medley(tm) (0,1,10)
via     : VIA Software RAID (S,0,1,10)
dos     : DOS partitions on SW RAIDs

Greg
I believe dmraid is primarily a discovery tool. It interrogates the system for fakeraid controllers, and it can also look at metadata blocks on the drives themselves to look for fakeraid signatures. If it determines fakeraid is in use, it does its best to determine the appropriate configuration and tells the kernel how to properly interact with the drives. Here's the list of things dmraid claims to support:
$ /sbin/dmraid -l
asr     : Adaptec HostRAID ASR (0,1,10)
ddf1    : SNIA DDF1 (0,1,4,5,linear)
hpt37x  : Highpoint HPT37X (S,0,1,10,01)
hpt45x  : Highpoint HPT45X (S,0,1,10)
isw     : Intel Software RAID (0,1,5,01)
jmicron : JMicron ATARAID (S,0,1)
lsi     : LSI Logic MegaRAID (0,1,10)
nvidia  : NVidia RAID (S,0,1,10,5)
pdc     : Promise FastTrack (S,0,1,10)
sil     : Silicon Image(tm) Medley(tm) (0,1,10)
via     : VIA Software RAID (S,0,1,10)
dos     : DOS partitions on SW RAIDs

--- Ok... unfortunately, the only workstation I have right now that has one of these, a Dell SAS 6/iR Integrated Workstation Controller, is running Win7, where it gives about 3-4X throughput on a 4-disk RAID0 (did I mention that all my data and programs are backed up on my server! ;-) and those 4 disks are solid state AND I have weekly image backups of the system disk?)
Last time I expanded -- (if you can do this on Windows, why not linux?) I used cygwin to run "dd" and make a full disk copy to an HD I put in a DVD slot temporarily, then added new solid state drives, configured them in the BIOS, then booted linux-recovery to 'dd' the image from the HD to the new virtual drive & reboot!

I guess I don't see why a driver couldn't talk to a software RAID in the BIOS -- the lsi MegaRAID (0,1,10) in your list seems to describe the SAS iR... Since all the reads/writes to those disks can be done in parallel with a SAS or SATA controller, it's like a no-brainer to create a low-resource raid 0/1/10... So I've not had much question about it being able to be done... but is a ROM emulating a low-level HW piece still HW?? Or a SW component? (The interior sas cables come directly off the motherboard.) It's usually the base option for a new workstation, with HW RAID being 300-700 more...

It's BASED on this experience that I give the advice I give... a more limited raid that only works on linux didn't seem like a 'win'...
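The "dd" full-disk copy described above can be sketched as follows. The device names are placeholders, not from the original post; verify them with lsblk first, since reversing if= and of= destroys the source, and conv=noerror,sync is only worth adding when the source has bad sectors.

```shell
# Whole-disk image from the old drive (/dev/sdX) to the new one (/dev/sdY).
# Both names are placeholders -- substitute the real devices.
# conv=noerror,sync keeps going past bad sectors, padding them with zeros.
dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync
sync   # flush write buffers before removing either drive
```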
On 9/11/2014 12:47 PM, Linda Walsh wrote:
It is a solution you get w/o paying extra for a RAID card but that has *no hardware acceleration*... but the BIOS presents a RAID0 or RAID1 set as 1 disk for normal interactions.
I believe you will find the hardware acceleration is mythical anyway. At least that's been my finding.

-- _____________________________________ ---This space for rent---
John Andersen wrote:
On 9/11/2014 12:47 PM, Linda Walsh wrote:
It is a solution you get w/o paying extra for a RAID card but that has *no hardware acceleration*... but the BIOS presents a RAID0 or RAID1 set as 1 disk for normal interactions.
I believe you will find the hardware acceleration is mythical anyway. At least that's been my finding.
---- Compare RAID 5, 6, 50, 60 against the same using linux mdraid setups. Check your throughput as well as cpu usage. I think you'll find the mdraid version notably more taxing on your cpu. Of course on RAID0/1/10 no cpu is needed to schedule 2 async writes to each mirror and wait for them to complete. Modern I/O controllers don't write synchronously to single disks, why would they do so to mirrors? Again, John Andersen wrote:
There is one area that you have to be aware of: if you put /boot on a mirrored (raid1) drive, the system will always boot from whichever disk your menu.lst says to boot from, even if that is part of an MD raid. If that disk fails you have to manually switch your menu.lst to the other. (This is why I avoid /boot on MD raid.)
---- vs. using SW-BIOS RAID, you only see 1 disk for the RAID; you don't see the separate pieces -- not at boot and not in /dev.
I've had several fake raid cards and several fake raid motherboards, and each time, I dug around to find the jumper that disables the fake raid and just used the controllers as separate channels. (The cheapest of the cheap fake raid cards also fake using multiple channels, so buyer beware: your WRITES will be done serially, not in parallel, on these cheap cards.)
---- If you say so... but SATA only talks to 1 disk/channel unless you use a SATA multiplexor -- which requires special drivers to work on windows and on linux -- (unlike the SAS expanders, the SATA expanders are cautioned against).
On 9/11/2014 6:06 AM, Greg Freemyer wrote:
If you want RAID0 or RAID1, my experience with BIOS raid for those 2 types is that is more versatile, reliable and software transparent. Since you want RAID1, assuming you are talking SATA or SAS, both disks can be written-to at the same time by the controller and it will look like a single disk to windows and linux (and any other OS). You and I have had different experiences. The most obvious is your statement about Linux seeing a fakeraid as a single disk. With the fakeraid I've attempted to use, Linux sees each disk and has to be made to recognize the fakeraid exists and read the raid config out of
I disagree, *depending* on what type of RAID you want. the bios, then take the responsibility of managing the disks just as it does with software raid.
Fyi: fake raid works by implementing custom firmware in the bios I/o interface used during boot. Thus Windows and Linux only see a single drive during the boot process, but for Linux as soon as the normal ATA/scsi drivers kick in the bios is bypassed and the kernel has to manage the raid array itself. dmraid (as opposed to MD raid or mdraid) is the Linux package that is responsible for reading the config out of the bios and setting up the Linux kernel to manage the disks correctly.
My experience has been identical to Greg's, and software raid has been bulletproof.

There is one area that you have to be aware of: if you put /boot on a mirrored (raid1) drive, the system will always boot from whichever disk your menu.lst says to boot from, even if that is part of an MD raid. If that disk fails you have to manually switch your menu.lst to the other. (This is why I avoid /boot on MD raid.)

I've had several fake raid cards and several fake raid motherboards, and each time, I dug around to find the jumper that disables the fake raid and just used the controllers as separate channels. (The cheapest of the cheap fake raid cards also fake using multiple channels, so buyer beware: your WRITES will be done serially, not in parallel, on these cheap cards.)

-- _____________________________________ ---This space for rent---
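For the pure-software-raid route being recommended here, the usual mdadm workflow for building the mirror and later swapping a failed member looks roughly like this. This is only a sketch: the device names, the array name /dev/md0, the ext4 choice, and the /etc/mdadm.conf location are all assumptions, not from the thread.

```shell
# Build a 2-disk RAID1 mirror for /home out of two partitions (sketch).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0                         # then mount it as /home
mdadm --detail --scan >> /etc/mdadm.conf   # record it so it assembles at boot

# Later, replacing a failed member:
mdadm /dev/md0 --fail /dev/sdc1     # mark it failed (if the kernel hasn't)
mdadm /dev/md0 --remove /dev/sdc1   # detach it from the array
# ...swap the physical disk, partition it to match the survivor, then:
mdadm /dev/md0 --add /dev/sdc1      # re-add; the mirror resyncs itself
cat /proc/mdstat                    # watch the resync progress
```

This answers the "steps to replace a disk in a mirror" part of the original question: fail, remove, swap, add, then wait for the resync shown in /proc/mdstat.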
On 9/10/2014 2:56 PM, Paul Groves wrote:
I have a 160GB 3.5" Disk for my swap and root partitions and it has a few bad sectors. I would like to change this disk to a 2.5" 160GB Disk
I also have a 500GB 3.5" disk for my home partition that I would like to replace with 2x1TB 2.5" in RAID1 (Mirror)
Does anyone have any advice on how I should go about upgrading my disks? I have not had to do this in linux before and would like to get it right first time.
All I know is that cloning the disk in the case of my root drive will not work due to the disk ID but I cannot find a working guide on how to do this correctly..
I have no clue what to do for the RAID. My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
What are the steps to replace a disk in a mirror if one was to fail? Is it a simple process like on my FreeNAS?
I am using 13.1
I just went through this with my laptop, which developed bad sectors. I had an external 2.5" enclosure (usb2/3). After asking for advice on the list and getting many suggestions, I decided to use Clonezilla running from a CD-ROM. It offered to clone and resize the partitions to use the whole drive (the new drive was bigger). I chose the conservative way of just cloning, leaving unpartitioned space available at the end of the new drive for when I next install.

There was a setting in Clonezilla to be aggressive in trying to recover the bad sectors. I believe it got everything, because I knew these bad sectors were right in the middle of a virtual machine image file, and that VM came up just fine after the move.

Recap: Left bad drive in machine. Put new drive in external enclosure. Cloned from internal to external in aggressive recovery mode.

Warning: When you put in the new drive, it will not (may not) boot because of the way openSUSE typically uses device names for naming boot partitions in grub menu.lst and also in fstab. You will have to hand edit those two files on the NEW drive either before or after you put it into the machine as the boot drive. See: http://diggerpage.blogspot.com/2011/11/cannot-boot-opensuse-12-after-cloning...

You get the drive names from

There are a lot of different ways to do this, but this worked and wasn't more than mildly painful.

-- _____________________________________ ---This space for rent---
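The hand edit described above is essentially a search-and-replace of the old disk's identifiers in fstab and menu.lst. A sketch, done on a scratch copy of fstab so nothing is touched in place; both by-id names below are made up for illustration, and on the real system the new names come from `ls /dev/disk/by-id/` or `blkid <partition>`.

```shell
# Made-up identifiers standing in for the old and new disks:
old_id='ata-ST3160815AS_OLD-part2'
new_id='ata-WDC_WD1600BEVT_NEW-part2'

# Fabricate a one-line fstab to demonstrate the substitution:
printf '/dev/disk/by-id/%s  /  ext4  defaults 1 1\n' "$old_id" > fstab.old
sed "s|$old_id|$new_id|g" fstab.old > fstab.new
cat fstab.new   # now refers to the new disk
```

The same substitution applies to the root= argument on the kernel line in /boot/grub/menu.lst on the cloned drive.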
On Wednesday 10 of September 2014 15:37:44 John Andersen wrote:
I just went through this with my laptop, which developed bad sectors. I had an external 2.5" inch enclosure (usb2/3).
Same here, but in my case things were more complicated. The new disk is 512B/4kB (logical/physical sector size) and the old one 512B/512B, and the external controllers would report 4kB/4kB for the new one, which produced a corrupted partition table and several other problems when I transferred it into the laptop. I had to connect the failing disk to a USB adapter and the new one to the laptop, then copy the files. -- Regards, Peter
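The mismatch Peter hit can be spotted before cloning by comparing what each attachment path reports for the sector sizes. A sketch; /dev/sda is an example device name.

```shell
# Logical and physical sector sizes as the kernel currently sees them.
# Run once with the disk attached directly and once through the USB
# adapter; if the logical sizes differ (512 vs 4096), a raw copy of the
# partition table will not transfer cleanly between the two paths.
blockdev --getss --getpbsz /dev/sda
cat /sys/block/sda/queue/logical_block_size \
    /sys/block/sda/queue/physical_block_size
```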
On 11/09/14 08:37, John Andersen wrote:
On 9/10/2014 2:56 PM, Paul Groves wrote:
I have a 160GB 3.5" Disk for my swap and root partitions and it has a few bad sectors. I would like to change this disk to a 2.5" 160GB Disk
I also have a 500GB 3.5" disk for my home partition that I would like to replace with 2x1TB 2.5" in RAID1 (Mirror)
Does anyone have any advice on how I should go about upgrading my disks? I have not had to do this in linux before and would like to get it right first time.
All I know is that cloning the disk in the case of my root drive will not work due to the disk ID, but I cannot find a working guide on how to do this correctly.
I have no clue what to do for the RAID. My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
What are the steps to replace a disk in a mirror if one was to fail? Is it a simple process like on my FreeNAS?
I am using 13.1
I just went through this with my laptop, which developed bad sectors. I had an external 2.5" enclosure (USB 2/3). After asking for advice on the list and getting many suggestions,
I decided to use Clonezilla running from a CD-ROM. It offered to clone and resize the partitions to use the whole drive (the new drive was bigger). I chose the conservative way of just cloning, leaving unpartitioned space available at the end of the new drive for when I next install. [pruned]
A question, John. One of the HDDs I am replacing this coming weekend contains both installed Windows 7 Professional + partitions formatted in NTFS AS WELL AS Linux partitions (ext4). The new HDD will be the same size as the one it replaces. The question is: can I just run Clonezilla once against the "old" HDD and have it clone both the Windows and the Linux partitions in the one run, or will I need to run Clonezilla separately against the NTFS partitions and then the Linux partitions? BC -- Using openSUSE 13.1, KDE 4.14.0 & kernel 3.16.2-1 on a system with- AMD FX 8-core 3.6/4.2GHz processor 16GB PC14900/1866MHz Quad Channel RAM Gigabyte AMD3+ m/board; Gigabyte nVidia GTX660 GPU
On 10/09/14, Paul Groves wrote:
I have a 160GB 3.5" Disk for my swap and root partitions and it has a few bad sectors. I would like to change this disk to a 2.5" 160GB Disk
I also have a 500GB 3.5" disk for my home partition that I would like to replace with 2x1TB 2.5" in RAID1 (Mirror)
Does anyone have any advice on how I should go about upgrading my disks? I have not had to do this in linux before and would like to get it right first time.
Shortest path: backup data, reinstall everything, restore backup.
All I know is that cloning the disk in the case of my root drive will not work due to the disk ID, but I cannot find a working guide on how to do this correctly.
My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
Unless you are using a very high-end motherboard, the answer to that question is no. Most if not all "fakeraid" implementations included with consumer-level motherboards are of horrendous quality.
What are the steps to replace a disk in a mirror if one was to fail? Is it a simple process like on my FreeNAS?
https://raid.wiki.kernel.org/index.php/Reconstruction -- Cristian "I don't know the key to success, but the key to failure is trying to please everybody."
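For a Linux software (md) RAID1, replacing a failed member follows the fail/remove/add pattern described on that wiki page. The device names below are hypothetical, and the commands are only echoed here, not executed, since they need real disks:

```shell
md=/dev/md0      # hypothetical array
bad=/dev/sdb1    # hypothetical failed member
new=/dev/sdd1    # hypothetical replacement partition (same size or larger)
fail_cmd="mdadm $md --fail $bad --remove $bad"
add_cmd="mdadm $md --add $new"
echo "$fail_cmd"   # mark the member failed and pull it from the array
echo "$add_cmd"    # add the new partition; the mirror then resyncs
```

After the --add, you can watch /proc/mdstat to follow the resync, much like watching a resilver on FreeNAS.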
On 2014-09-11 01:14, Cristian Rodríguez wrote:
On 10/09/14, Paul Groves wrote:
My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
Unless you are using a very high end motherboard..the answer to that question is no. Most if not all "fakeraids" included with consumer-level motherboards are of horrendous quality.
And very possibly, if you have to replace the motherboard, you cannot reuse the raid disks as they are; you typically need to reconstruct from a backup. Performance-wise, fake raid is basically software raid (it runs on the main CPU) with some assistance from the board for reading during boot, until the drivers are loaded. So, if it runs on the CPU anyway, better to just use Linux software raid, which is far more flexible and does not depend on particular hardware. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
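Setting up the 2x1TB mirror for /home with Linux software raid is a short mdadm sequence. A sketch with hypothetical device names; the commands are echoed rather than run, since they would destroy data on real disks:

```shell
# Build a RAID1 array from two partitions, then put a filesystem on it:
create="mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1"
echo "$create"
echo "mkfs.ext4 /dev/md0"                        # filesystem for /home
echo "mdadm --detail --scan >> /etc/mdadm.conf"  # persist the array config
```

YaST's partitioner can set up the same software raid from the GUI on 13.1, if you prefer not to use the command line.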
On Wed, 10 Sep 2014 14:56:54 -0700 (PDT), Paul Groves wrote:
I have a 160GB 3.5" Disk for my swap and root partitions and it has a few bad sectors. I would like to change this disk to a 2.5" 160GB Disk
I also have a 500GB 3.5" disk for my home partition that I would like to replace with 2x1TB 2.5" in RAID1 (Mirror)
Does anyone have any advice on how I should go about upgrading my disks? I have not had to do this in linux before and would like to get it right first time.
All I know is that cloning the disk in the case of my root drive will not work due to the disk ID, but I cannot find a working guide on how to do this correctly.
I have no clue what to do for the RAID. My MoBo supports RAID, should I use this functionality or set up a software RAID with opensuse?
What are the steps to replace a disk in a mirror if one was to fail? Is it a simple process like on my FreeNAS?
I am using 13.1
I won't comment on the raid questions as I don't have knowledge there. That said, how are the disks mounted in /etc/fstab? If by UUID, then you will probably need to re-install the OS on the new drives and reload your data (you did back that up, right?). Replacing a drive is MUCH easier if /etc/fstab mounts by "label": you can temporarily mount both drives, with the new drive having a different label during this phase. Then just do a forced copy (cp -af /src /dest), remove the old drive, and rename the new drive's label to what the old drive had. (You did back up, just in case, right?) Drives can be labeled with either e2label (or tune2fs -L) or parted. See the man pages for usage of either. Tom -- Life takes on meaning when you become motivated, set goals and charge after them in an unstoppable manner. -Les Brown ^^ --... ...-- / -.- --. --... -.-. ..-. -.-. ^^^^ Tom Taylor KG7CFC openSUSE 13.1 (64-bit), Kernel 3.11.6-4-default, KDE 4.11.2, AMD Phenom X4 955, GeForce GTX 550 Ti (Nvidia 337.19) 16GB RAM -- 3x1.5TB sata2 -- 128GB-SSD FF 27.0, claws-mail 3.10.0 registered linux user 263467
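Tom's label scheme looks like this in practice; the device name and mount point are hypothetical, and the relabel/copy commands are only echoed here, not executed, since they need the real disks:

```shell
# fstab entry that mounts by label, so the physical device can change freely:
fstab_line='LABEL=home /home ext4 defaults 1 2'
echo "$fstab_line"
# The migration steps Tom describes (shown only):
echo "e2label /dev/sdb1 home_new   # temporary label while both disks are in"
echo "cp -af /home/. /mnt/newhome/"
echo "e2label /dev/sdb1 home       # final label after the old disk is removed"
```

Because fstab references the label, nothing in it needs editing once the new drive carries the old label.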
participants (10)
- auxsvr@gmail.com
- Basil Chupin
- Carlos E. R.
- Carlos E. R.
- Cristian Rodríguez
- Greg Freemyer
- John Andersen
- Linda Walsh
- Paul Groves
- Thomas Taylor