Thorsten Bro wrote:
Hi Per,
On 25.06.19 at 21:27, Per Jessen wrote:
Kurt Garloff wrote:
In the end, running this on an old SUSE-donated machine with a lot of slow spinning disks does not match the performance needs of the machine. So there needs to be a final, well-engineered solution for this, which has, until now, always been out of scope of any budget.
Does someone have a good understanding of what the needs would be?
With a 1Gbit uplink, it takes about 100Mbyte/s to saturate it. Thorsten ran some numbers in April showing that actual usage is less than half that: 200-300Mbit/s daily, with peaks of up to 500Mbit/s, so roughly 50Mbyte/s.
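For reference, a quick back-of-the-envelope conversion of those figures (a minimal sketch in Python; the numbers are the ones quoted above, and the theoretical link capacity ignores protocol overhead):

```python
# Back-of-the-envelope conversion of the figures quoted above.
LINK_MBIT = 1000                       # 1 Gbit/s uplink
observed_mbit = [200, 300, 500]        # daily range and peak reported by Thorsten

print(f"link capacity: ~{LINK_MBIT / 8:.0f} MByte/s (theoretical, before overhead)")
for mbit in observed_mbit:
    print(f"{mbit} Mbit/s ~= {mbit / 8:.0f} MByte/s ({mbit / LINK_MBIT:.0%} of the uplink)")
```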
Being fairly conservative, I think any modern machine with one array controller (an Adaptec 1000 for instance) and 8 drives in RAID6 should easily do the job, assuming we get e.g. 10Mbyte/s from each drive, which should be well within reach even with our random-access pattern.
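As a sanity check on that estimate, a small sketch, assuming the 8-drive RAID6 layout and the deliberately pessimistic 10Mbyte/s per drive mentioned above:

```python
# Conservative RAID6 throughput estimate with the assumptions stated above.
drives = 8
parity_drives = 2            # RAID6 uses two drives' worth of parity
per_drive_mbyte = 10         # deliberately pessimistic random-access figure

data_drives = drives - parity_drives
aggregate = data_drives * per_drive_mbyte
print(f"{data_drives} data drives x {per_drive_mbyte} MByte/s ~= {aggregate} MByte/s aggregate")
```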
I'll go along with your calculation, but what we had there was 2 system disks in a hardware RAID and 6 x 4TB disks in a software MD-RAID in RAID5 mode.
Ah, thanks for adding that, I didn't know the current config.
So we are at around 50-60MByte/s at best :) with this software RAID setup. Furthermore, we have a 1GBit downlink (with a specific route) to download new content from stage.o.o, and we also get hit by the repopusher over that same route. Plus we have the rsync clients on another 1GBit interface, downloading in parallel. Yes, the stats showed something like ~500MBit/s, but that is per interface.
I'm surprised we can even push that much from stage.o.o - so we should plan for 50Mbyte in and 50Mbyte out, per second. Writing 50Mbyte/s to RAID5 or 6 with hardware support is also fine.
And that's why I often saw the rsync processes in state "D" (uninterruptible sleep) - we were stuck because of disk performance. I would like to have SSDs there, but they are super expensive at 4TB and up. On top of that, we are also running out of space: with the current setup we planned for 19TB (back then all repos were 15TB), and now, about a year later, we peak at around 18.51TB on stage.o.o.
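(For anyone who wants to reproduce that observation, a minimal sketch that lists processes currently in uninterruptible sleep, state "D", by reading the standard /proc/<pid>/stat files on Linux:)

```python
#!/usr/bin/env python3
"""List processes in uninterruptible sleep ('D'), e.g. rsync blocked on disk I/O."""
import glob

for stat_path in glob.glob("/proc/[0-9]*/stat"):
    try:
        with open(stat_path) as f:
            line = f.read()
    except OSError:
        continue  # process exited while we were scanning
    # comm is enclosed in parentheses and may contain spaces, so split around it
    pid = line.split(" ", 1)[0]
    comm = line[line.index("(") + 1 : line.rindex(")")]
    state = line[line.rindex(")") + 2 : line.rindex(")") + 3]
    if state == "D":
        print(f"{pid:>7}  {comm}  (uninterruptible sleep, likely waiting on I/O)")
```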
I tried to balance speed, size and cost. With cost-effective hardware (= cheaper spinning drives), our desired speed is easily achievable, and we also get more space for less money. Room to grow. I specifically say RAID6 because of the time needed to rebuild a 4Tb+ drive - bigger drive = longer rebuild time = longer window of risk.
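To put rough numbers on that rebuild-time concern, a small sketch; the 100-150Mbyte/s sequential rebuild rates are assumptions for illustration, not measured figures:

```python
# Rough rebuild-time estimate for a failed drive, by size and assumed rebuild rate.
def rebuild_hours(drive_tb, rebuild_mbyte_s):
    return drive_tb * 1e6 / rebuild_mbyte_s / 3600   # TB -> MByte, seconds -> hours

for size_tb in (4, 8):
    for rate in (100, 150):                          # assumed sequential rates, MByte/s
        print(f"{size_tb} TB drive at {rate} MByte/s: ~{rebuild_hours(size_tb, rate):.0f} h")
```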
All this needs a hardware upgrade - unfortunately.
Do you think those are reasonable numbers? What size should we expect? 30TB? Or even more?
More :-) pontifex has about 20Tb right now and is 94% full, so it would make sense to go for 30Tb or more.

Thinking out loud - comments very welcome - I would go for a storage server from e.g. Thomas Krenn, with IPMI and redundant power supplies. One of those is EUR 2000-2500, depending on spec. A 3U box has room for 16 drives, which gives us room to expand. I would want two smaller 2.5" SSDs in RAID1 to boot from - they're cheap - and then start with 8 x 8Tb drives (~48Tb usable). Seagate Enterprise drives are EUR 300-350 apiece, I think. Add a hardware RAID controller, EUR 500. So, for about EUR 5000 we would have a system that is reliable and has plenty of room to grow. We could probably order it custom built from TK.

When we run out of space in a couple of years, it would be easy to add another controller and migrate from one to the other (with more disk space).

--
Per Jessen, Zürich (23.3°C)
Member, openSUSE Heroes
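Tallying those numbers as a quick sketch (the boot-SSD price of roughly EUR 100 each is my own assumption; everything else is the estimate quoted above):

```python
# Rough cost and usable-capacity tally for the proposed Thomas Krenn box.
chassis = (2000, 2500)          # 3U storage server with IPMI, redundant PSUs
drive = (300, 350)              # Seagate Enterprise 8 TB, per drive
drive_count = 8
raid_controller = 500
boot_ssds = 2 * 100             # assumed ~EUR 100 per small 2.5" SSD (not quoted above)

low = chassis[0] + drive_count * drive[0] + raid_controller + boot_ssds
high = chassis[1] + drive_count * drive[1] + raid_controller + boot_ssds
usable_tb = (drive_count - 2) * 8   # RAID6: two drives' worth of parity
print(f"estimated cost: EUR {low}-{high}, usable capacity ~{usable_tb} TB")
```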