On Wednesday 03 May 2017, pit wrote:
Hi Rudi,
Ruediger Meier wrote:
Raid5 is usually a bad choice for writing, especially with many small files, and especially with that many disks. You may google for "raid5 write penalty".
Write speed by itself is fine, it can definitely write faster than our system can deliver data for it (bonnie++ puts it at 1.5GB/s, normal workload is 500-600MB/s).
Sequential writing is fast on raid5; there is no write penalty if you write a whole stripe. BTW 1.5GB/s is nothing compared to what you should expect if you sum up 16x SSD speed. I guess SSD was a waste of money in your case.
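To illustrate the write-penalty point above, here is a small sketch (my own illustration, not from the thread; the per-disk IOPS figure is a made-up round number): a small random write on RAID5 costs 4 I/Os (read data, read parity, write data, write parity), while on RAID10 it costs 2 (two mirrored writes).

```python
# Sketch of the "raid5 write penalty" for small random writes.
# Assumptions (hypothetical, for illustration only):
#   - 16 disks, 10_000 small-random-write IOPS per disk
#   - RAID5 read-modify-write penalty = 4, RAID10 penalty = 2

def random_write_iops(n_disks: int, iops_per_disk: int, penalty: int) -> int:
    """Aggregate small-random-write IOPS for an array with a given penalty."""
    return n_disks * iops_per_disk // penalty

PER_DISK = 10_000  # hypothetical per-disk figure

raid5 = random_write_iops(16, PER_DISK, penalty=4)
raid10 = random_write_iops(16, PER_DISK, penalty=2)
print(raid5, raid10)  # RAID10 sustains twice the small-write IOPS here
```

Full-stripe sequential writes avoid the read-modify-write cycle entirely, which is why sequential throughput on RAID5 looks fine even when small-file workloads suffer.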
The fact that you are using such expensive SSDs indicates that you want performance. Maybe RAID10 would be the better choice.
Mostly for storage space, plus some heat concerns with disks. But RAID10 would waste too much space...
Regarding the costs: you are using 16 fast, large and expensive (but still not enterprise!) 4TB SSDs on one of the cheapest possible mainboards. I think even in theory your raid array can't be faster than a raid10 array of cheap rotating (but certified enterprise) disks.
Another thing: hardware controllers may disable the write cache of your HDs by default.
Thanks for the hint - I'll investigate that.
Now I know that XFS is not the fastest for this operation, BUT: the computer has an 'emergency RAID set', in case we run out of space. It is a 6x6TB HDD RAID5, connected to the mainboard's SATA ports, also mdadm RAID with XFS. On this (in general much slower) 28TB RAID, the same dataset gets deleted in around 2 minutes.
AFAIR such benchmarks are only comparable if both file systems have the same size and content, or better still, if both are empty and newly created. I remember that I could never reproduce my old measurements after the file system had been in heavy use for some months.
Sure, but a factor of 20 difference is somewhat difficult to explain...
Very interesting. Are your SSDs officially supported by your controller? Professional controllers usually have a list of certified HD models, and on the other hand they usually have more incompatibility issues than mainstream hardware. I would contact the vendor and ask about known issues.
I guess that mostly applies if you use the HW RAID of the cards - we only use them as 'SATA port multipliers'....
No, I've seen HDs which did not work at all on a particular controller, and even worse, HDs which worked unstably, regardless of raid level. I've also had issues with enterprise controllers on consumer mainboards. In my cases I could solve the problems with firmware updates for controller and HDs. But this was luck. On the other hand I've never had incompatibilities with any HD on cheap onboard controllers. So I've learned my lesson: I don't mix enterprise and consumer hardware. If I were building storage as expensive as yours, I would only combine *certified* combinations of mainboard, controller, HDs and operating system.

cu, Rudi