I was kinda surprised.. I enabled hardware JBOD on an ASUS motherboard, built a volume and fired up Linux... It still saw two separate disks, and not the built volume.. Can this be worked around, or will I be forced to use software md? Anders.
JBOD == Just a bunch of disks.
Did you mean hardware RAID?
On 5/15/06, Anders Norrbring
I was kinda surprised.. I enabled hardware JBOD on an ASUS motherboard, built a volume and fired up Linux... It still saw two separate disks, and not the built volume.. Can this be worked around, or will I be forced to use software md?
Anders.
-- Check the headers for your unsubscription address For additional commands send e-mail to suse-linux-e-help@suse.com Also check the archives at http://lists.suse.com Please read the FAQs: suse-linux-e-faq@suse.com
-- Greg Freemyer The Norcross Group Forensics for the 21st Century
Greg Freemyer skrev:
JBOD == Just a bunch of disks.
Exactly.. Like a LVM, but in hardware..
Did you mean hardware RAID?
Nope, that's what I use Adaptec and LSI SCSI RAID adapters for.. :)

Anders
On 5/15/06, Anders Norrbring
wrote: I was kinda surprised.. I enabled hardware JBOD on an ASUS motherboard, built a volume and fired up Linux... It still saw two separate disks, and not the built volume.. Can this be worked around, or will I be forced to use software md?
Anders.
On 5/15/06, Anders Norrbring
Greg Freemyer skrev:
JBOD == Just a bunch of disks.
Exactly.. Like a LVM, but in hardware..
Did you mean hardware RAID?
Nope, that I use Adaptec and LSI SCSI RAID adapters for.. :)
Anders
Anders,

Do you have a url/reference that describes an ASUS JBOD as having LVM-like functionality? I do a lot with storage, and normally JBOD means zero intelligence. I.e., you can buy disk shelves that support RAID, or they can be JBOD. In the JBOD case, all the shelf is handling is the SCSI bus (or whatever) and the power. Zero intelligence.

Greg
-- Greg Freemyer The Norcross Group Forensics for the 21st Century
Greg Freemyer skrev:
On 5/15/06, Anders Norrbring
wrote: Greg Freemyer skrev:
JBOD == Just a bunch of disks.
Exactly.. Like a LVM, but in hardware..
Did you mean hardware RAID?
Nope, that I use Adaptec and LSI SCSI RAID adapters for.. :)
Anders
Anders,
Do you have a url/reference that describes an ASUS JBOD as having LVM-like functionality?
I do a lot with storage, and normally JBOD means zero intelligence. I.e., you can buy disk shelves that support RAID, or they can be JBOD. In the JBOD case, all the shelf is handling is the SCSI bus (or whatever) and the power. Zero intelligence.
Greg
Yep, got that info too... I guess I was just too optimistic about the nVidia functionality. It's a software-only solution, so I'll go with EVMS and LVM2 in this case.

Anders.
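For the JBOD-style goal (linear concatenation, no redundancy), a plain LVM2 setup is one way to sketch it. The device names /dev/sda and /dev/sdb are placeholders for the two SATA disks, ext3 is just an example filesystem, and everything here must run as root and destroys any existing data on those disks:

```shell
# Sketch only: concatenate two whole disks into one linear volume,
# i.e. the software equivalent of the BIOS "JBOD" option.
pvcreate /dev/sda /dev/sdb            # mark both disks as LVM physical volumes
vgcreate vg_span /dev/sda /dev/sdb    # pool them into one volume group
lvcreate -l 100%FREE -n data vg_span  # one linear LV spanning both disks
mkfs.ext3 /dev/vg_span/data           # filesystem on the concatenated volume
mount /dev/vg_span/data /mnt/data
```

Note that, as with any concatenation, losing either disk loses the whole volume.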
I am currently dealing with the EXACT same problem......

Best as I can tell (can't get any answers from IRC) it MUST be software RAID on 2.6.x kernels.

Trying to find information on the SuSE portal is not an easy task. The error message, I believe, is referring to this page:

http://support.novell.com/techcenter/sdb/en/2004/04/91_fakeraid.html

If it is INDEED true, why don't they just say so right at the error message, or give the exact URL?

So my question is whether or not a doze box with the supplied drivers will operate faster than my SuSE with software RAID. I'm building this box to replace an NT4 server with the same RAID (MB, SATA & driver) on a 100 Mbit network. This new server is an amd_64 Asus A8N-E with 2 250 GB SATA2 drives that will be plugged into a gigabit network solely for the purpose of providing a Samba share.

Will the software RAID be fast enough for a Samba share on a gigabit network with a dozen full-time apps getting their data realtime from the server?

I guess if it is not, then there are 2 questions:

Is there a TRUE high-performance hardware RAID that IS supported?

or (depending on the above answer)

Would I be better off to mirror the drives with rsync after hours.... B-)

P.S. And no, this ISN'T a hijack. It's more information for the same, or at least a similar, problem, for more luck in getting answers for the OP as well as me.

On Monday 15 May 2006 10:52 am, Anders Norrbring wrote:
I was kinda surprised.. I enabled hardware JBOD on an ASUS motherboard, built a volume and fired up Linux... It still saw two separate disks, and not the built volume.. Can this be worked around, or will I be forced to use software md?
Anders.
Brad Bourn wrote:
Will the software RAID be fast enough for a Samba share on a gigabit network with a dozen full-time apps getting their data realtime from the server?
I suspect only a benchmark will tell.
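As a first rough check (assuming a GNU userland), something like the following on the array's filesystem gives a ballpark sequential write figure. Gigabit Ethernet tops out near ~110 MB/s on the wire, so that is roughly the number the disks need to beat before the network becomes the bottleneck:

```shell
# Quick-and-dirty sequential write test; a real benchmark (bonnie++,
# iozone) over the mounted array is more telling. conv=fsync forces the
# data to disk so the rate reflects the disks, not just the page cache.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync
BYTES=$(wc -c < "$TESTFILE")
rm -f "$TESTFILE"
echo "wrote $BYTES bytes"
```

Run it on the filesystem you actually want to measure (mktemp defaults to /tmp, which may live on a different disk).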
I guess if it is not, then, there are 2 questions.
Is there a TRUE high-performance hardware RAID that IS supported?
Plenty. Currently popular manufacturers include e.g. 3ware for their SATA RAID controllers, but older ones such as Compaq, Adaptec, IGP, Mylex etc. also have good support.

/Per Jessen, Zürich
On Monday 15 May 2006 18:10, Brad Bourn wrote:
I am currently dealing with EXACT same problem......
Best as I can tell (can't get any answers from irc) it MUST be software RAID on 2.6.x kernels.
Trying to find information on the SuSE portal is not an easy task. The error message, I believe, is referring to this page:
http://support.novell.com/techcenter/sdb/en/2004/04/91_fakeraid.html
If it is INDEED true, why don't they just say so right at the error message, or give the exact URL?
So my question is whether or not a doze box with the supplied drivers will operate faster than my SuSE with software RAID.
I'm building this box to replace an NT4 server with the same RAID (MB, SATA & driver) on a 100 Mbit network. This new server is an amd_64 Asus A8N-E with 2 250 GB SATA2 drives that will be plugged into a gigabit network solely for the purpose of providing a Samba share.
Will the software RAID be fast enough for a Samba share on a gigabit network with a dozen full-time apps getting their data realtime from the server?
I guess if it is not, then, there are 2 questions.
Is there a TRUE high-performance hardware RAID that IS supported?
or (depending on above answer)
Would I be better off to mirror the drives with rsync after hours....
I have never yet seen a motherboard with true on-board hardware RAID.

I have, however, just set up a software RAID 5 solution using SUSE 10.1. Watching 'top' whilst hammering the box (gigabit Ethernet + a couple of Xen VMs), I've never seen the CPU utilisation for the MD RAID daemon exceed 1.3% of CPU resource; typically it's 0.3 to 0.7%. I can live with that.

Alternatively, I think the 3ware RAID controllers are pretty well supported by Linux, although most of these are 64-bit PCI and won't fit most desktop motherboards.

David
-- David Bottrill david@bottrill.org www.bottrill.org Registered Linux user number 330730 Internet Free World Dialup: 683864
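For reference, a minimal md RAID 5 along these lines might look as follows. The partitions /dev/sda1, /dev/sdb1 and /dev/sdc1 are placeholders, it must run as root, and it destroys any data on those partitions:

```shell
# Sketch only: assemble three partitions (type 0xFD) into a RAID 5 array.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat          # watch the initial resync progress here
mkfs.ext3 /dev/md0        # then make a filesystem and mount it
mount /dev/md0 /srv/data
```

YaST can drive the same setup through its partitioner, which is how most SUSE installs end up with md arrays.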
Brad Bourn wrote:
I am currently dealing with EXACT same problem......
Best as I can tell (can't get any answers from irc) it MUST be software RAID on 2.6.x kernels.
AFAIK there is no hardware raid chip on Asus motherboards, so the raid functionality is provided entirely by the driver.
Trying to find information on the SuSE portal is not an easy task. The error message, I believe, is referring to this page:
http://support.novell.com/techcenter/sdb/en/2004/04/91_fakeraid.html
If it is INDEED true, why don't they just say so right at the error message, or give the exact URL?
So my question is whether or not a doze box with the supplied drivers will operate faster than my SuSE with software RAID.
It should not differ very much. It is more a question of preference and if you want to shell out the money for the server license and the client access licenses. We have some networked appliances that store their data on mounted network drives. Some of them really don't like it if there is even a short disruption in the network connection. My advice is to also make sure you have a reliable connection all the way from the network card to the cable up to the switch.
I'm building this box to replace an NT4 server with the same RAID (MB, SATA & driver) on a 100 Mbit network. This new server is an amd_64 Asus A8N-E with 2 250 GB SATA2 drives that will be plugged into a gigabit network solely for the purpose of providing a Samba share.
Will the software RAID be fast enough for a Samba share on a gigabit network with a dozen full-time apps getting their data realtime from the server?
I guess if it is not, then, there are 2 questions.
Is there a TRUE high-performance hardware RAID that IS supported?
If you can wait a few days I might give you an answer. I ordered a new workstation with an Asus A8N32-SLI Deluxe motherboard, an Areca ARC-1220 8-port RAID controller and a bunch of SATA HDDs. I will compare the software (Windows) RAID 1/5 to the hardware RAID (1/5/6) from the Areca controller. With a bit of luck I can give you an answer within the week.
or (depending on above answer)
Would I be better off to mirror the drives with rsync after hours....
The question is rather how much downtime you can afford, and whether you can sync the data if some files are kept open and locked by the apps. If you can't afford downtime you need hotplug and hardware RAID. It doesn't cost that much these days.

Sandy
-- List replies only please! Please address PMs to: news-reply2 (@) japantest (.) homelinux (.) com
Sandy Drobic skrev:
Brad Bourn wrote:
I am currently dealing with EXACT same problem......
Best as I can tell (can't get any answers from irc) it MUST be software RAID on 2.6.x kernels.
AFAIK there is no hardware raid chip on Asus motherboards, so the raid functionality is provided entirely by the driver. [8<]
This particular mobo carries a nVidia chipset with RAID-0/1 and JBOD... Anders.
[8<]
This particular mobo carries a nVidia chipset with RAID-0/1 and JBOD...
Anders.

Anders,

Just remember that JBOD stands for "just a bunch of drives" and has no RAID functions, as far as I know. You need to set it up as RAID 0 or RAID 1. I'm not using any of the RAID ports on my Asus boards at this time, but I have had pretty good luck with all the 3ware cards I have bought for setting up RAID. I'm waiting on some drive bays to come in, and then I'm going to try out the RAID ports on my new A8N-SLI DLX from Asus in a few days. I should have the drive bays in on Wednesday, and my boss should have the computer ready by then, I hope, or I will build it myself.
jack
Jack Malone skrev:
[8<]
This particular mobo carries a nVidia chipset with RAID-0/1 and JBOD...
Anders.
Anders, just remember that JBOD stands for "just a bunch of drives" and has no RAID functions, as far as I know. You need to set it up as RAID 0 or RAID 1. I'm not using any of the RAID ports on my Asus boards at this time, but I have had pretty good luck with all the 3ware cards I have bought for setting up RAID. I'm waiting on some drive bays to come in, and then I'm going to try out the RAID ports on my new A8N-SLI DLX from Asus in a few days. I should have the drive bays in on Wednesday, and my boss should have the computer ready by then, I hope, or I will build it myself.
jack

Thanks Jack, but I'm not looking to set up a RAID, just a JBOD.. :) I use Adaptec and LSI SCSI RAID for RAIDs, never IDE or SATA.

Anders.
On Mon, 2006-05-15 at 18:52 +0200, Anders Norrbring wrote:
I was kinda surprised.. I enabled hardware JBOD on an ASUS motherboard, built a volume and fired up Linux... It still saw two separate disks, and not the built volume.. Can this be worked around, or will I be forced to use software md?
99.9% of on-board mainboard ATA is _fake_ RAID (FRAID). Other than 16-bit BIOS Int13h Disk Services for boot, it is implemented in 100% _software_ (in the 32/64-bit driver). Because this RAID logic is licensed from a 3rd party, it is _never_ Open Source / GPL. That's why you need to load a "binary only" driver from the vendor.

The 2 workarounds are ...
- Legacy: GPL ataraid + hptraid/pdcraid/silraid, and
- LVM2: GPL Device Mapper 2 (DM2)

The legacy approach is a generic "ataraid" logic core that is paired with vendor-specific interfaces. This is largely unreliable and has always netted me toasted data. ;->

The newer approach is to use enhancements to Device Mapper 2 (DM2) in the Logical Volume Manager 2 (LVM2) that can "read" the proprietary organization of various FRAID cards and leverage enhanced LVM2's built-in spanning, striping, mirroring and, possibly, even parity (depending on the card). As a side benefit, LVM2+DM2 can also read several true hardware RAID volumes, e.g. 3Ware RAID-0, 1 and 10 volumes, in case your 3Ware card (and its on-board ASIC+firmware) dies.

For more on how "FRAID" differs from _real_, hardware RAID, see my 2004 April column in Sys Admin. There are also some related articles in my blog on FRAID v. true microcontroller and/or ASIC-driven hardware RAID.

-- Bryan J. Smith Professional, technical annoyance mailto:b.j.smith@ieee.org http://thebs413.blogspot.com
Americans don't get upset because citizens in some foreign nations can burn the American flag -- Americans get upset because citizens in those same nations can't burn their own
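As a concrete sketch of the device-mapper route, assuming the dmraid tool (which reads the BIOS fakeRAID metadata on the disks and builds device-mapper tables from it) is installed. It must run as root, and the resulting device names depend on the controller's metadata format:

```shell
# Sketch only: discover and activate BIOS fakeRAID sets via device-mapper.
dmraid -r            # list the RAID sets described by the on-disk metadata
dmraid -s            # show each set's status and layout
dmraid -ay           # activate all sets; block devices appear in /dev/mapper/
ls -l /dev/mapper/   # the assembled volume can now be partitioned and mounted
```

This lets Linux see the same volume the Windows driver would, without the binary-only vendor driver, since only the metadata layout, not the RAID logic, needs to be understood.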
participants (8)
- Anders Norrbring
- Brad Bourn
- Bryan J. Smith
- David Bottrill
- Greg Freemyer
- Jack Malone
- Per Jessen
- Sandy Drobic