Hi Per,

On 25.06.19 at 21:27, Per Jessen wrote:
Kurt Garloff wrote:
In the end, running this on an old SUSE-donated machine with a lot of slow spinning disks does not match the performance needs of the service. So there needs to be a final, well-engineered solution for this, which was, until now, always out of scope of any budget.
Does someone have a good understanding of what the needs would be?
With a 1GBit uplink, it takes about 100MByte/s to saturate. Thorsten ran some numbers in April showing the actual usage is less than half that: daily 200-300MBit/s with peaks of up to 500MBit/s, so roughly 60MByte/s at peak.
Being fairly conservative, I think any modern machine with one array controller (an Adaptec 1000, for instance) and 8 drives in RAID6 should easily do the job, assuming we get e.g. 10MByte/s from each drive, which should be well within reach even with our random-access pattern.
I follow your calculation, but what we had there were 2 system disks in a
hardware RAID and 6 x 4TB disks in a software MD-RAID running RAID5.
So with this software RAID setup we top out at around 50-60MByte/s,
if we ever reach the top :)
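
To make the math explicit, here is a quick back-of-the-envelope sketch in
Python. The 10MByte/s per-drive figure under random access is your
assumption from above, not a measured value; everything else follows
from it:

  # Rough throughput model; 10 MByte/s per drive under random access
  # is the assumption from Per's mail, not a measured value.
  PER_DRIVE_MBYTE_S = 10

  def raid_read_throughput(total_drives, parity_drives):
      # Naive model: reads stripe across the data drives only.
      return (total_drives - parity_drives) * PER_DRIVE_MBYTE_S

  print(1000 / 8)                    # 1 GBit/s link: 125 MByte/s raw
  print(raid_read_throughput(8, 2))  # proposed 8-drive RAID6: 60 MByte/s
  print(raid_read_throughput(6, 1))  # our 6-drive MD RAID5:   50 MByte/s

So even in this optimistic model, the current array only just matches one
saturated interface.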
Furthermore, we have a 1GBit downlink (with a specific route) to
download new content from stage.o.o, and the repopusher triggers us over
that same path as well. On top of that, the rsync clients sit on another
1GBit interface and download in parallel. Yes, the stats [1] showed
something like ~500MBit/s, but that is per interface. That is also why I
saw the rsync processes in state "D" (uninterruptible sleep, i.e.
waiting on I/O) a lot of the time. So we were stuck because of disk
performance.
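
If anyone wants to reproduce that observation, here is a minimal sketch
of one way to list processes in state "D" (Python over /proc, Linux
only; "ps -eo pid,comm,stat" shows the same state letter):

  # List processes in state "D" (uninterruptible sleep, typically
  # waiting on disk I/O). Parses /proc/<pid>/stat.
  import os

  for pid in filter(str.isdigit, os.listdir("/proc")):
      try:
          with open(f"/proc/{pid}/stat") as f:
              fields = f.read().split()
          # fields: pid (comm) state ... -- the naive split is fine as
          # long as the process name has no spaces (true for "rsync")
          if fields[2] == "D":
              print(pid, fields[1])
      except OSError:
          pass  # the process exited while we were scanning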
I would like to put SSDs in there, but at 4TB and above they are super
expensive. On top of that, we are also running out of space: with the
current setup we planned for 19TB (back then all repos took 15TB), and
now, about a year later, we peak at around 18.51TB on stage.o.o.
All this needs a hardware upgrade, unfortunately.
Do you think those are reasonable numbers? What size should we expect?
30TB, or even more?
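
As a sanity check on the size question, here is a naive linear
projection from the numbers above (15TB at planning time, 18.51TB about
a year later), assuming growth stays roughly linear, which it may well
not:

  # Naive linear capacity projection from the figures in this mail.
  start_tb, now_tb, years_elapsed = 15.0, 18.51, 1.0
  growth_per_year = (now_tb - start_tb) / years_elapsed  # ~3.5 TB/year

  for years_ahead in (1, 2, 3):
      projected = now_tb + growth_per_year * years_ahead
      print(f"in {years_ahead} year(s): ~{projected:.1f} TB")
  # -> ~22.0, ~25.5, ~29.0 TB

Under that (shaky) assumption, 30TB would buy us roughly three years of
headroom.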
Best regards,
Thorsten
[1] https://mirrors.opensuse.org/list/rsyncinfo-stage.o.o.txt
--
Thorsten Bro