On Monday 2012-12-10 06:41, Claudio Freire wrote:
> On Sun, Dec 9, 2012 at 4:40 PM, Jan Engelhardt <jengelh@inai.de> wrote:
>> Reason for asking: to improve packaging times during busy periods on the openSUSE Build Service (OBS).
>> The problem is not disk, it's CPU.
>> For example, build11 is a 2-core Opteron 8214 according to CPUID, but it is configured for 24 parallel gcc instances [WORKER_INSTANCES=6 x WORKER_JOBS=4]. It is only natural that it feels "slow" with such overbooking settings.
> Or don't overbook?
> What's the rationale for that WORKER_JOBS=4 there?
Basically, to have all CPUs utilized at all times in the event that other worker instances are idle or I/O-bound. That is quite a sensible goal in itself. Certainly, WORKER_JOBS=<number of CPUs in the physical machine> would be sufficient for that, i.e. WORKER_JOBS=2 for build11. And you do not want WORKER_JOBS=1 either, so that there is still a chance to catch packages that stupidly break parallel building.

Solving the overbooking problem is made harder by the use of virtual machines. The closest I have come to a fix so far is:

1. using WORKER_INSTANCES=<number of CPUs> (or the desired utilization), and
2. making obs-worker use "%_smp_mflags -j<WORKER_INSTANCES> -l<WORKER_INSTANCES>" instead of the default "%_smp_mflags -j<WORKER_JOBS>"

(see the sketch below).

I tell you, make -l is awesome magic; it roughly auto-balances the obs process tree between "1 worker running 24 gccs" and "24 workers running 1 gcc each". Needless to say, this requires that the load being consulted is always the *host's* load, so make -l out of the box is useless for VM builds at the moment.
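To make that concrete, here is roughly what it would look like on build11. The sysconfig path and the exact place where %_smp_mflags gets set inside the build root are assumptions for illustration; only the variable names are the ones from above.

    # /etc/sysconfig/obs-worker (path assumed for illustration):
    # one worker instance per physical CPU instead of 6x4 overbooking
    WORKER_INSTANCES=2

    # rpm macro as seen inside the build root (wiring assumed):
    # parallelism derived from WORKER_INSTANCES, with -l as the brake
    %_smp_mflags -j2 -l2

A package's usual "make %{?_smp_mflags}" then expands to "make -j2 -l2".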
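For those who have not played with it: -l (--load-average) is plain GNU make; the numbers below are arbitrary and only show the behaviour.

    # start up to 24 jobs, but refuse to launch a new one while other
    # jobs are already running and the load average is at or above 2.0
    make -j24 -l2.0

That is where the auto-balancing comes from: whichever worker happens to find the machine idle gets to fan out, the others hold back until the load drops again.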
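And the VM caveat in a nutshell: make takes the load average from the kernel it runs under, so inside a build VM it throttles on the guest's figure, which says nothing about how busy the physical host is. A purely illustrative way to see the mismatch:

    # inside the build VM: reflects only this guest's own load
    cat /proc/loadavg

    # on the physical host: the figure make would actually need
    cat /proc/loadavg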