I'd piecemeal it... Get a PC with a PCIe x16 (or wider) slot and put in an LSI MegaRAID 9286CV-8e. It has 8 external 6Gb/s SAS channels. Have it talking to two (obsolete) LSI DE1600-SAS 12x3.5" enclosures and one Areca ARC-8026 (loud fans) 24x3.5" enclosure. The LSI controller will be a bit finicky about which disks it will accept as "good"; if you want good performance, you'll want enterprise-level drives. I put 24 4TB Hitachi Ultrastars in them about 5-6 years ago. They're set up as RAID10, so only 48TB of usable storage, but if bought today, 6TB disks might be a better buy.

Since the PC would be running Linux, you can export the disks as SMB (CIFS), NFS or iSCSI (a rough config sketch is at the end of this note). I found SMB gave me 125MB/s writes (that's decimal megabytes, 125 million bytes/s) and 119MB/s reads over a 1Gb connection. Going to 10Gb, transfers start to be CPU-limited under SMB (dunno about iSCSI or NFS), but right now I get:

  h> bin/iotest
  Using bs=16.0M, count=64, iosize=1.0G
  R:1073741824 bytes (1.0GB) copied, 1.88318 s, 544MB/s
  W:1073741824 bytes (1.0GB) copied, 4.95738 s, 207MB/s

(Those are binary prefixes, so roughly 570MB/s reads and 217MB/s writes in decimal.) There's a dd-based equivalent of that test sketched below.

The SAS enclosures can be daisy-chained up to 8 enclosures (might be more), so you could have 192TB of storage with 4TB disks (8 x 12 bays = 96 drives, 384TB raw, halved by RAID10).

Dunno if that is what you were thinking of or not?
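That iotest looks like a thin wrapper around dd (the "bytes ... copied" lines are dd's transfer summary). If you want to run something similar yourself, here's a minimal sketch; the /mnt/array mount point is just a placeholder for wherever the share is mounted:

  #!/bin/sh
  # Rough 1GiB sequential write/read test (same bs/count as the iotest above).
  # /mnt/array is a placeholder; point it at the mounted share.
  TESTFILE=/mnt/array/ddtest.$$
  # Write 64 x 16MiB = 1GiB; direct I/O + fsync so the page cache doesn't flatter the number.
  dd if=/dev/zero of="$TESTFILE" bs=16M count=64 oflag=direct conv=fsync
  # Read it back, again bypassing the page cache.
  dd if="$TESTFILE" of=/dev/null bs=16M iflag=direct
  rm -f "$TESTFILE"

Some network filesystems refuse O_DIRECT; if dd complains about the *flag=direct options, drop them, but then expect the cache to inflate the numbers.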
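On the export side, the SMB and NFS pieces are just a share definition and an exports line. A bare-bones sketch (the [array] share name, the /srv/array path and the 192.168.1.0/24 subnet are placeholders; the service names are the openSUSE ones, smb and nfs-server, and may differ on other distros):

  # Samba: add a share to /etc/samba/smb.conf
  cat >> /etc/samba/smb.conf <<'EOF'
  [array]
      path = /srv/array
      read only = no
      browseable = yes
  EOF
  systemctl enable --now smb

  # NFS: export the same directory to the local subnet
  echo '/srv/array 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
  systemctl enable --now nfs-server
  exportfs -ra

Clients would then mount with something like "mount -t cifs //server/array /mnt -o user=whoever" or "mount -t nfs server:/srv/array /mnt". iSCSI is a block-level target rather than a file share, so it needs different tooling on the server.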