[opensuse-buildservice] Increasing the throughput of Open Build Service (OBS) package building with a Fusion-io PCIe drive card
Hello, I see about 110 jobs on OBS with a processing time of 2 hours or more.[1]
The reason for asking: to improve packaging times during busy periods on OBS.
Some questions:
- Has a Fusion-io PCIe SSD card ever been tried on OBS?
- Would a Fusion-io PCIe SSD help shrink the processing time of OBS jobs, thereby improving job turnaround times?
- I know a card costs money (depending on capacity/model).
- I saw some manufacturer-refurbished PCIe drive cards on eBay.[2]
Cheers, Glenn
[1] https://build.opensuse.org/monitor/old
[2] http://www.ebay.com/sch/Computers-Tablets-Networking-/58058/i.html?LH_BIN=1&_nkw=ioDrive&_sop=15
[3] Demo: http://www.youtube.com/watch?v=9J5xGwdmsuo
-- To unsubscribe, e-mail: opensuse-buildservice+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse-buildservice+owner@opensuse.org
On 2012-11-30 23:07:28 (+1100), doiggl@velocitynet.com.au <doiggl@velocitynet.com.au> wrote:
Hello, I see about 110 jobs on OBS with a processing time of 2 hours or more.[1]
The reason for asking: to improve packaging times during busy periods on OBS.
Some questions: - Has a Fusion-io PCIe SSD card ever been tried on OBS? - Would it help shrink the processing time of OBS jobs, thereby improving job turnaround times? - I know a card costs money (depending on capacity/model). - I saw some manufacturer-refurbished PCIe drive cards on eBay.[2]
I think this is the fourth or fifth time you've spammed us with a commercial for Fusion-io. Do you work for that company? -- -o) Pascal Bleser /\\ http://opensuse.org -- we haz green _\_v http://fosdem.org -- we haz conf
El 30/11/12 09:07, doiggl@velocitynet.com.au escribió:
Some questions: - Has a Fusion-io PCIe SSD card ever been tried on OBS? - Would it help shrink the processing time of OBS jobs, thereby improving job turnaround times? - I know a card costs money (depending on capacity/model). - I saw some manufacturer-refurbished PCIe drive cards on eBay.[2]
Setting up the buildroots in RAM will probably be cheaper than buying those cards for a process as volatile as building packages.
On Friday 2012-11-30 18:01, Cristian Rodríguez wrote:
El 30/11/12 09:07, doiggl@velocitynet.com.au escribió:
Some questions: - Has a Fusion-io PCIe SSD card ever been tried on OBS? - Would it help shrink the processing time of OBS jobs, thereby improving job turnaround times? - I know a card costs money (depending on capacity/model). - I saw some manufacturer-refurbished PCIe drive cards on eBay.[2]
Setting up the buildroots in RAM will probably be cheaper than buying those cards for a process as volatile as building packages.
Just for the record again: using tmpfs is hardly any faster than an ordinary SSD, so using Fusion-io will *NOT* really help. This is because newly created files go into the page cache and are only flushed later, opportunistically or under memory pressure. Given that the buildroot for many packages is below 1.5 GB, a 32 GB RAM machine can easily soak up lots of files, and once the (relatively) slow compilation phase starts, you don't care about background writeout. (In other words, tmpfs is not even required, which also avoids the danger of running into -ENOSPC on tmpfs.)
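As a rough back-of-the-envelope sketch of the point above (the 1.5 GB buildroot and 32 GB RAM figures come from the message; the "comfortable" threshold of a quarter of RAM is purely an illustrative assumption):

```shell
# Hedged sketch: does a typical buildroot fit comfortably in the page cache?
# Sizes are from the thread; the RAM/4 threshold is an assumption.
BUILDROOT_MB=1500          # typical buildroot (~1.5 GB)
RAM_MB=$((32 * 1024))      # the 32 GB machine from the example
if [ "$BUILDROOT_MB" -lt $((RAM_MB / 4)) ]; then
    echo "buildroot fits comfortably in the page cache"
else
    echo "buildroot may force early writeback"
fi
```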
On Friday 2012-11-30 13:07, doiggl@velocitynet.com.au wrote:
Hello, I see about 110 jobs on OBS with a processing time of 2 hours or more.[1]
The reason for asking: to improve packaging times during busy periods on OBS.
The problem is not disk, it's CPU. For example, build11 is a 2-core Opteron 8214 according to CPUID, but is configured for 24 parallel gcc instances [WORKER_INSTANCES=6, WORKER_JOBS=4]. It is only natural for it to feel "slow" with these "overbooking" settings. If you want that fixed, invest in some 64-core machine(s) [the 80-core ones don't pay off ATM].
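The arithmetic behind the 24-instance figure, with the values taken from the message above:

```shell
# build11 as described in the thread: 2 physical cores,
# 6 worker instances each running make with 4 jobs.
CORES=2
WORKER_INSTANCES=6
WORKER_JOBS=4
PARALLEL_GCC=$((WORKER_INSTANCES * WORKER_JOBS))  # concurrent compiler processes
OVERCOMMIT=$((PARALLEL_GCC / CORES))              # jobs per physical core
echo "${PARALLEL_GCC} gcc instances on ${CORES} cores (${OVERCOMMIT}x overbooked)"
```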
On Sun, Dec 9, 2012 at 4:40 PM, Jan Engelhardt <jengelh@inai.de> wrote:
The reason for asking: to improve packaging times during busy periods on OBS.
The problem is not disk, it's CPU.
For example, build11 is a 2-core Opteron 8214 according to CPUID, but is configured for 24 parallel gcc instances [WORKER_INSTANCES=6, WORKER_JOBS=4]. It is only natural to feel "slow" with these "overbooking" settings.
Or don't overbook? What's the rationale for WORKER_JOBS=4 there? If under-utilized build hosts were expected, I could understand a 3 (to cover I/O wait), but that's certainly not the case for OBS, and even if it were, it would be quite debatable. So 2 seems closer to the optimum there (maybe not even that).
On Monday 2012-12-10 06:41, Claudio Freire wrote:
On Sun, Dec 9, 2012 at 4:40 PM, Jan Engelhardt <jengelh@inai.de> wrote:
The reason for asking: to improve packaging times during busy periods on OBS.
The problem is not disk, it's CPU.
For example, build11 is a 2-core Opteron 8214 according to CPUID, but is configured for 24 parallel gcc instances [WORKER_INSTANCES=6, WORKER_JOBS=4]. It is only natural to feel "slow" with these "overbooking" settings.
Or don't overbook?
What's the rationale for that WORKER_JOBS=4 there?
Basically, to have all CPUs utilized at all times in case other worker instances are idle or I/O-bound. It is quite a sensible decision. Certainly, WORKER_JOBS=<number of CPUs in the physical machine> would be sufficient for that, i.e. WORKER_JOBS=2 for build11. And you do not want WORKER_JOBS=1, so that you have a chance of catching packages that break parallel building. Solving the overbooking problem is made harder by the use of virtual machines. So far, one way I have found to get a reasonably close fit is:
1. using WORKER_INSTANCES=<number of CPUs> [or desired utilization]
2. making obs-worker use %_smp_mflags -jWORKER_INSTANCES -lWORKER_INSTANCES instead of the default %_smp_mflags -jWORKER_JOBS
I tell you, make -l is awesome magic; it roughly auto-balances your OBS process tree between "1 worker using 24 gccs" and "24 workers using 1 gcc each". Needless to say, this requires always looking at the *host's* load, so make -l out of the box is useless for VM builds ATM.
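A minimal sketch of the two steps above (WORKER_INSTANCES and the make flags follow the thread; deriving the default from nproc is an assumption for illustration):

```shell
# Hedged sketch of the proposed scheme: tie both -j (job cap) and
# -l (load-average throttle) to WORKER_INSTANCES.
# The nproc-based default is an assumption, not from the thread.
NCPUS=$(nproc 2>/dev/null || echo 4)          # host CPU count, with fallback
WORKER_INSTANCES=${WORKER_INSTANCES:-$NCPUS}  # step 1: one instance per CPU
# step 2: make starts no new job while host load exceeds the -l limit
SMP_MFLAGS="-j${WORKER_INSTANCES} -l${WORKER_INSTANCES}"
echo "make ${SMP_MFLAGS}"
```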
On Mon, Dec 10, 2012 at 3:06 AM, Jan Engelhardt <jengelh@inai.de> wrote:
2. Making obs-worker use %_smp_mflags -jWORKER_INSTANCES -lWORKER_INSTANCES instead of the default %_smp_mflags -jWORKER_JOBS
I tell you, make -l is awesome magic; it sort of autobalances your obs process tree between "1 worker using 24 gccs" and "24 workers using 1 gcc each" roughly. Needless to say, this requires that one always looks at the *host's* load, so make -l out of the box is useless for VM builds ATM.
Cool magic, I didn't know it. So that's why ARM builds take forever?
On Monday 2012-12-10 16:22, Claudio Freire wrote:
On Mon, Dec 10, 2012 at 3:06 AM, Jan Engelhardt <jengelh@inai.de> wrote:
2. Making obs-worker use %_smp_mflags -jWORKER_INSTANCES -lWORKER_INSTANCES instead of the default %_smp_mflags -jWORKER_JOBS
I tell you, make -l is awesome magic; it sort of autobalances your obs process tree between "1 worker using 24 gccs" and "24 workers using 1 gcc each" roughly. Needless to say, this requires that one always looks at the *host's* load, so make -l out of the box is useless for VM builds ATM.
Cool magic, I didn't know it.
So that's why ARM builds take forever?
ARM is emulated. (At least I can't spot any ARM-type workers on build.opensuse.org/monitor.)
On Mon, Dec 10, 2012 at 12:38 PM, Jan Engelhardt <jengelh@inai.de> wrote:
I tell you, make -l is awesome magic; it sort of autobalances your obs process tree between "1 worker using 24 gccs" and "24 workers using 1 gcc each" roughly. Needless to say, this requires that one always looks at the *host's* load, so make -l out of the box is useless for VM builds ATM.
Cool magic, I didn't know it.
So that's why ARM builds take forever?
ARM is emulated. (At least I can't spot any arm-type workers in build.opensuse.org/monitor)
Exactly, so make -l only sees the VM load, not the host load. For all the others it should see host load, since last I checked OBS used a chroot, not a VM, for "native" builds. So when there are many ARM jobs on a host, the host slows to a crawl. Makes sense.
On Monday 2012-12-10 16:43, Claudio Freire wrote:
On Mon, Dec 10, 2012 at 12:38 PM, Jan Engelhardt <jengelh@inai.de> wrote:
I tell you, make -l is awesome magic; it sort of autobalances your obs process tree between "1 worker using 24 gccs" and "24 workers using 1 gcc each" roughly. Needless to say, this requires that one always looks at the *host's* load, so make -l out of the box is useless for VM builds ATM.
Cool magic, I didn't know it.
So that's why ARM builds take forever?
ARM is emulated. (At least I can't spot any arm-type workers in build.opensuse.org/monitor)
Exactly, so make -l only sees VM load, not host load. For all others, it should see host load since last I checked OBS worked on a chroot, not a VM for "native" builds.
build.opensuse.org uses Xen/KVM.
On Mon, Dec 10, 2012 at 1:54 PM, Jan Engelhardt <jengelh@inai.de> wrote:
On Mon, Dec 10, 2012 at 12:38 PM, Jan Engelhardt <jengelh@inai.de> wrote:
I tell you, make -l is awesome magic; it sort of autobalances your obs process tree between "1 worker using 24 gccs" and "24 workers using 1 gcc each" roughly. Needless to say, this requires that one always looks at the *host's* load, so make -l out of the box is useless for VM builds ATM.
Cool magic, I didn't know it.
So that's why ARM builds take forever?
ARM is emulated. (At least I can't spot any arm-type workers in build.opensuse.org/monitor)
Exactly, so make -l only sees VM load, not host load. For all others, it should see host load since last I checked OBS worked on a chroot, not a VM for "native" builds.
build.opensuse.org uses Xen/KVM.
If that's so, then I'd suggest -j WORKER_JOBS -l WORKER_INSTANCES (and setting WORKER_JOBS = # of cores per VM). Also, I've noticed that dedicating one core to the dom0 improves I/O considerably. Though it's not practicable on dual-cores, it does help when there are many cores. If not done, there is no real parallelism of CPU vs. I/O, because virtualization and task switching on the dom0 get in the way. This is especially true for network I/O, which seems to be a weak spot in some Xen deployments on not-overly-new hardware with paravirtualization.
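Dedicating a core to dom0 as described is typically done via Xen hypervisor boot parameters; a hedged sketch (the thread doesn't specify how, and the exact values depend on the host):

```shell
# Hedged sketch: give dom0 a single vCPU and pin it to a physical core,
# via the Xen hypervisor command line (e.g. in the GRUB configuration).
# dom0_max_vcpus=1 -> dom0 gets one vCPU
# dom0_vcpus_pin   -> pin dom0's vCPU to a physical CPU
GRUB_CMDLINE_XEN="dom0_max_vcpus=1 dom0_vcpus_pin"
echo "$GRUB_CMDLINE_XEN"
```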
On Monday 2012-12-10 18:40, Claudio Freire wrote:
build.opensuse.org uses Xen/KVM.
If that's so, then I'd suggest
-j WORKER_JOBS -l WORKER_INSTANCES
(and setting WORKER_JOBS = # of cores per VM)
Was there something that was unclear when I mentioned that -l is useless inside a VM? (because it does not reflect the $baremetal load)
On Mon, Dec 10, 2012 at 8:56 PM, Jan Engelhardt <jengelh@inai.de> wrote:
On Monday 2012-12-10 18:40, Claudio Freire wrote:
build.opensuse.org uses Xen/KVM.
If that's so, then I'd suggest
-j WORKER_JOBS -l WORKER_INSTANCES
(and setting WORKER_JOBS = # of cores per VM)
Was there something that was unclear when I mentioned that -l is useless inside a VM? (because it does not reflect the $baremetal load)
Well, the appliances don't seem to use VMs anyway, at least not the ones I'm using, so if a default is to be changed I thought it ought to accommodate both use cases.
participants (5)
- Claudio Freire
- Cristian Rodríguez
- doiggl@velocitynet.com.au
- Jan Engelhardt
- Pascal Bleser