On 05/03/2017 07:39 AM, Peter Suetterlin wrote:
Hi,
I think there are some big-data guys around here, so I thought I'd see if someone has a clue on this:
I'm running a large (55 TB) RAID5 set for our data acquisition system. It's 16 4 TB SSDs (Samsung 850 EVO) connected to two LSI MegaRAID SAS-3 3008 cards sitting in an Asus Z170-Deluxe mainboard. The disks are in JBOD mode; the RAID is assembled with mdadm. The filesystem is XFS.
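For reference, a setup like the one described would be built roughly like this. This is only a sketch; the device names (/dev/sd[b-q], /dev/md0) and mount point are assumptions, not taken from the actual system:

```shell
# Hypothetical recreation of the described layout: 16 JBOD disks
# combined into one md RAID-5 array, with XFS on top.
mdadm --create /dev/md0 --level=5 --raid-devices=16 /dev/sd[b-q]
mkfs.xfs /dev/md0        # mkfs.xfs reads stripe geometry from the md device
mount /dev/md0 /data     # mount point assumed
cat /proc/mdstat         # check array state and any running resync
```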
In general it is a very nice system, but there is one incomprehensible thing: some of our cameras generate their data as many small files (~700 kB each), collected in a single directory at 36 files/s. So we end up with a lot of files.
The problem arises when the data are to be deleted: an rm -rf on a 700 GB directory tree (several cameras, several runs, so the data is typically split across some 30-40 subdirectories) takes around 40 MINUTES.
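To put that in perspective, the figures quoted above (~700 kB per file, ~700 GB per tree, 40 minutes to delete) imply roughly a million unlink operations per run. A quick back-of-the-envelope check:

```shell
# Rough arithmetic for the deletion workload, using the numbers from
# the post (GB and kB treated as powers of 1000 for simplicity):
file_kb=700      # ~700 kB per file
total_gb=700     # ~700 GB per directory tree
minutes=40       # observed rm -rf time

nfiles=$(( total_gb * 1000 * 1000 / file_kb ))
echo "$nfiles files per tree"                     # 1000000 files per tree
echo "$(( nfiles / (minutes * 60) )) unlinks/s"   # 416 unlinks/s
```

So the complaint boils down to the filesystem sustaining only a few hundred unlinks per second on this hardware.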
Hi Peter,

FWIW, we also have a requirement to write lots of data. We use systems with SuperMicro X10DRH-iT motherboards, AVAGO (LSI) MegaRAID SAS 9361-8i RAID controllers, and two RAID-6 arrays each consisting of eleven 6 TB Seagate ST6000NM0095 spinning drives, configured with two dedicated hot-swap spares, in a 4U SuperMicro chassis. We also run the operating system from a two-SSD RAID-1 mirror on the same RAID controller.

We normally write thousands of 4 GB files and get about 1.6 GB/s write rates, but I just set up a test writing 1 TB worth of 1 MB files and got a rate of about 1.5 GB/s. I then sorted the files into nine directories and timed an "rm -r" on the lot: 33.7 seconds.

From previous experience we found mdraid gives significantly less performance than hardware RAID; the controllers we use support hardware RAID-6 directly. Also note that we don't use RAID-5 due to the single-drive-failure-during-rebuild issue. We can't afford to lose any data.

We haven't had a single problem with XFS. IIRC we tested EXT4 once and found XFS to be just a bit faster, but that was years ago and memory is fading. I do remember a fatal problem with BTRFS, though: it would crash, shred, and burn when writing more than 16 TB to a single partition. That also was years ago.

Is there a way for you to test hardware RAID?

Regards,
Lew
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
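Lew's test above is easy to reproduce at any scale. Here is a heavily scaled-down sketch (his run used 1 TB of 1 MB files across nine directories; the directory layout and file counts below are made up for illustration):

```shell
# Write a batch of small files into a few directories, count them,
# then time their removal -- the timing is the interesting number
# once the file count reaches the millions.
dir=$(mktemp -d)
for d in 1 2 3; do
    mkdir -p "$dir/$d"
    for i in $(seq 1 10); do
        # one 1 MB file per iteration
        dd if=/dev/zero of="$dir/$d/f$i" bs=1M count=1 status=none
    done
done
nf=$(find "$dir" -type f | wc -l)
echo "wrote $nf files"
time rm -r "$dir"
```

Scaling the counts up (and running on the array in question) would let Peter compare mdraid against a hardware-RAID volume directly.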