On Mon, 2009-07-06 at 08:42 -0700, Lew Wolfgang wrote:
> Hi Folks, I've been working on a project for several years now that requires lots of disk space. We've got three servers, each with 40 directly connected 1-TB SATA disks. Now, the project is expanding and will require as many as 98 directly connected disks. I know that I can get the hardware and RAID (JBOD) controllers to mechanize this, but I'm not sure about the OS. How many individual disk (/dev/sdxx) drives can Linux support?
The last number I saw was 2,304, but I think that kind of architecture (and even your current count) is unmanageable and unsustainable. You're going to need some high-end controllers to get anywhere near that number.
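For what it's worth, you can see how many sd devices your running kernel has actually registered by listing /sys/block. A minimal sketch, assuming the standard 2.6 sysfs layout (whole disks appear as top-level sd* entries, partitions sit underneath them, so nothing is double-counted):

#!/usr/bin/env python
# Count the SCSI/SATA (sd*) block devices the kernel currently sees
# by listing /sys/block.  Partitions live under their parent disk's
# entry, so only whole disks are counted here.
import os

def sd_disks(sys_block="/sys/block"):
    return sorted(d for d in os.listdir(sys_block) if d.startswith("sd"))

if __name__ == "__main__":
    disks = sd_disks()
    print("%d sd devices: %s" % (len(disks), ", ".join(disks)))

Run that on each of your three servers and you'll know exactly where you stand before adding the next batch of controllers.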
> BTW, we need directly connected disks because of the bandwidth limits that NFS throws up. I'd be happy to listen to alternatives.
Fibre Channel? Although I really suspect that if you tested iSCSI you'd find the bandwidth sufficient (I've heard the cannot-do-it-due-to-bandwidth claim a lot and found it to be very rarely true). Is there really no way to introduce HSM (hierarchical storage management) into your app? If the answer is no, I don't mean to be harsh, but your application is effectively broken (it will certainly stop scaling eventually). Maybe use DASD for recent/active data-sets and migrate less active data-sets to iSCSI-attached storage?
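To make that last suggestion concrete, here's a rough sketch of the kind of migration sweep I mean. The tier paths (/data/hot on direct-attached disk, /data/cold on iSCSI-backed storage) and the 90-day cutoff are made-up illustrations, not anything your setup implies:

#!/usr/bin/env python
# Poor-man's HSM sweep: move data-sets that haven't been touched in
# ~90 days from the fast direct-attached tier to the slower
# iSCSI-backed tier.  HOT, COLD and CUTOFF are illustrative only.
import os
import shutil
import time

HOT, COLD = "/data/hot", "/data/cold"
CUTOFF = time.time() - 90 * 86400  # older than ~90 days -> migrate

def last_active(path):
    """Newest mtime anywhere under path (treats a data-set as one unit)."""
    newest = os.path.getmtime(path)
    for root, _, files in os.walk(path):
        for f in files:
            newest = max(newest, os.path.getmtime(os.path.join(root, f)))
    return newest

def sweep():
    for name in os.listdir(HOT):
        src = os.path.join(HOT, name)
        if last_active(src) < CUTOFF:
            # shutil.move copies-then-deletes across filesystems
            shutil.move(src, os.path.join(COLD, name))
            print("migrated %s to cold tier" % name)

if __name__ == "__main__":
    sweep()

Run from cron nightly, and with a symlink left behind in the hot tier if you need transparent access, that gets you most of an HSM without touching the application itself.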