I had read this somewhere; however, it still doesn't explain why SUSE does not address the RAID array as one 300 GB drive when other distros will. I think having an sdX-style layout is actually beneficial. I understand that SATA is more IDE-like than SCSI, but it gets very confusing when you start adding more devices (i.e. IDE channel 1 now becomes hde...). Thanks for the clarification, though. By the way, I get ioctl errors showing up with this mobo; perhaps this is more a problem with the sata_via driver?

-----Original Message-----
From: Stefan Fent <sf@suse.de>
Sent: Thu, 1 Jul 2004 13:34:22 +0200
To: suse-amd64@suse.com
Subject: Re: [suse-amd64] Raid 0 + ReiserFS

* Joel Wiramu Pauling <aenertia@aenertia.net> [040701 09:00]:
I won't rule out the possibility that it was something to do with the sata_via kernel module. I have also noticed in the past that a lot of people have weird experiences with the way SUSE addresses the SATA/RAID of this chip: SUSE uses /dev/hdX to address the SATA drives, whereas all other distros use the sd layout. I had to skip setting up an array in the controller BIOS and do it entirely through Linux for it to work, because if I set up the RAID array first in the controller BIOS (striping both discs in full), SUSE would still insist on recognising them as two separate /dev/hdX devices...
This was changed in 2.6.x; vanilla 2.6.7 also uses hdX. "FakeRAID" doesn't work anymore on 2.6.x.
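Building the stripe entirely in Linux software RAID, as you describe, would look roughly like this (just a sketch; the device names are examples and depend on what the kernel assigned):

    # create a two-disc RAID 0 stripe in software, then put ReiserFS on it
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hde /dev/hdg
    mkreiserfs /dev/md0
    mount /dev/md0 /mnt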
Because I have heard from other people with this board and setup who have successfully got other distros (e.g. Mandrake 10) to recognise the BIOS-configured RAID array as an sdX device, I believe SUSE is partly to blame.
No. SuSE is using the kernel default. You can easily achieve the sdX style by commenting out the chip in /usr/src/linux/drivers/ide/pci/generic.[h,c].
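Roughly, assuming a 2.6 source tree under /usr/src/linux (a sketch, not exact build steps for your setup):

    cd /usr/src/linux
    # comment out the chip entry in drivers/ide/pci/generic.c (and its
    # declaration in generic.h) so the generic IDE driver skips the
    # controller and libata/sata_via can claim the discs as sdX
    make && make modules_install && make install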
I would welcome comment on this.
Like I said, I have posted extensively about the setup of this system under SUSE, and the way I was using it was the only configuration that worked...
Kind regards
Joel
-----Original Message-----
From: "Mike Tierney" <miket@marketview.co.nz>
Sent: Thu, 1 Jul 2004 16:13:15 +1200
To: <suse-amd64@suse.com>
Subject: RE: [suse-amd64] Raid 0 + ReiserFS
I've been running SLES 8 with RAID 0 on ReiserFS at home for a few months now with no problems, but then all I do with my home Linux installation is play Neverwinter Nights or tunnel into work via VMware, SSH and PCAnywhere...
You could always use LVM to stripe your ReiserFS; that's what I use (under SLES 8 AMD64) for our production database server. And yes, of course the striped ReiserFS is mirrored!
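A minimal sketch of a two-disc LVM stripe (the partition names, size and stripe size here are just examples, adjust to your layout):

    # two PVs, one VG, then a striped LV: -i 2 stripes across both PVs,
    # -I 64 sets a 64k stripe size
    pvcreate /dev/sda2 /dev/sdb2
    vgcreate datavg /dev/sda2 /dev/sdb2
    lvcreate -i 2 -I 64 -L 100G -n dblv datavg
    mkreiserfs /dev/datavg/dblv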
BTW, which exact version of SuSE Linux are you having this problem with?
The ReiserFS FAQ (http://www.namesys.com/faq.html) does mention these issues with _earlier_ kernels.
"Q. Can I use ReiserFS with the software RAID?"
"A. Yes, for all Raid levels using any Linux >= 2.4.1, but DO NOT use Raid5 with Linux 2.2.x. Our journaling and their Raid code step on each other in the buffering code. Also, mirroring is not safe in the 2.2.x kernels because online mirror rebuilds in 2.2.x break the write ordering requirements for the log. If you crash in the middle of an online rebuild, your meta-data may be corrupted. The only Raid level that is safe with ReiserFS in the 2.2.x kernels is the striping/concatenation level."
Anyway, if you store any important data on RAID 0 and don't have super-tight backups, you are asking for a beating, IMO.
I would take this problem to the ReiserFS guys (www.namesys.com) as they take any reported problems VERY seriously.
-----Original Message----- From: Joel Wiramu Pauling [mailto:aenertia@aenertia.net] Sent: Thursday, 1 July 2004 3:29 p.m. To: suse-amd64@suse.com Subject: [suse-amd64] Raid 0 + ReiserFS
Hi There,
I have a big warning:
Don't use RAID 0 and ReiserFS together. Bad, bad, BAD! I had been noticing strange errors for the last two months (certain files reporting 0 length and being impossible to remove), so I booted a single-user session and rebuilt the Reiser tree.
This corrected all those issues but created more in the process. Basically I have been left with holes all over my filesystem, in fairly random places. I believe ReiserFS 3.6 can't handle the striped nature of software RAID and loses track of where things are because of it. There is absolutely nothing wrong with the discs physically, and I'm pretty certain it's just the combination of ReiserFS and RAID 0.
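For the record, the rebuild I ran was along these lines (the device name is just an example; the filesystem must be unmounted, and image the device first if you possibly can):

    # read-only consistency check first, then the destructive tree rebuild
    reiserfsck --check /dev/md0
    reiserfsck --rebuild-tree /dev/md0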
Anyway, the fact is I've lost a lot of data. SUSE people... you need to change the default filesystem for RAID stripes. This progressive fall to doom will be happening to a number of people running SUSE with a RAID 0 stripe.
I've posted to the ReiserFS guys as well. Currently I'm trying to rescue what I can, but because the problems are random, even tarring things up and sending them across the network to a temporary machine may, in many cases, just carry the errors along.
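The rescue copy itself is nothing fancy, just something along these lines (host and paths are examples):

    # stream a tarball straight over SSH to the temporary machine
    tar cf - /home | ssh rescuebox "tar xf - -C /rescue"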
Kind regards
Joel
--
Stefan Fent
SuSE Linux AG, Maxfeldstr. 5, D-90409 Nuernberg
GPG fingerprint = B226 E3DA 37B0 2170 7403 D19C 18AF E579 9161 4BBC

--
Check the List-Unsubscribe header to unsubscribe
For additional commands, email: suse-amd64-help@suse.com