[opensuse-packaging] New macro to limit resources allocation per thread
Hello all,

For use in libreoffice, chromium and others I've created a macro that should allow you to limit jobs based on some constraints you can set later on in the spec to avoid OOM crashes.

The usage is pretty straightforward (once it is accepted in Tumbleweed):

===
BuildRequires:  memory-constraints

%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
===

Here the _smp_mflags value for an 8GB machine would be 4, while the default is the number of cores (let's say 16)...

Both macros %jobs and %_smp_mflags are overridden, so the integration should be really painless if you need to do something like this.

Tom
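The arithmetic behind that example, spelled out below as a shell-style sketch (an illustration of the intended behaviour on a hypothetical 16-core, 8 GB worker, not the macro's actual implementation):

===
# hypothetical worker: 16 cores, ~8000 MB of RAM, %limit_build -m 2000
jobs_by_memory=$((8000 / 2000))   # = 4
jobs_by_cores=16                  # the default: one job per core
# the smaller of the two wins, so %_smp_mflags ends up as -j4
===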
On 04/06/2018 03:52 PM, Tomas Chvatal wrote:
BuildRequires: memory-constraints
%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
Great job! This macro was sorely needed in OBS. I remember that old boost had some hacks similar to this macro to try to limit the number of parallel jobs. Of course, your version is much better.

I suspect this macro is mostly needed on non-x86_64 arches though.

Now we need this macro in SLE-15 too ;)

Thanks,
Adam
On Mon, 9 Apr 2018, Adam Majer wrote:
On 04/06/2018 03:52 PM, Tomas Chvatal wrote:
BuildRequires: memory-constraints
%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
Great job!
This macro was sorely needed in OBS. I remember that old boost had some hacks similar to this macro to try to limit the number of parallel jobs. Of course, your version is much better.
I suspect this macro is mostly needed on non-x86_64 arches though.
Now we need this macro in SLE-15 too ;)
I still think this belongs to _constraints ...

Richard.

--
Richard Biener <rguenther@suse.de>
SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nuernberg)
On 04/09/2018 09:40 AM, Richard Biener wrote:
On Mon, 9 Apr 2018, Adam Majer wrote:
On 04/06/2018 03:52 PM, Tomas Chvatal wrote:
BuildRequires: memory-constraints
%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
Great job!
This macro was sorely needed in OBS. I remember that old boost had some hacks similar to this macro to try to limit the number of parallel jobs. Of course, your version is much better.
I suspect this macro is mostly needed on non-x86_64 arches though.
Now we need this macro in SLE-15 too ;)
I still think this belongs to _constraints ...
https://github.com/openSUSE/open-build-service/issues/1953

The problem with having _constraints is that OBS uses these as a simple filter. So a big machine with many cores and memory would be filtered out by this _constraints, while all we need is to not use all of the available cores on such a machine.

So, having thought about it, maybe the macro route is the more flexible approach.

- Adam
On Mon, 9 Apr 2018, Adam Majer wrote:
On 04/09/2018 09:40 AM, Richard Biener wrote:
On Mon, 9 Apr 2018, Adam Majer wrote:
On 04/06/2018 03:52 PM, Tomas Chvatal wrote:
BuildRequires: memory-constraints
%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
Great job!
This macro was sorely needed in OBS. I remember that old boost had some hacks similar to this macro to try to limit the number of parallel jobs. Of course, your version is much better.
I suspect this macro is mostly needed on non-x86_64 arches though.
Now we need this macro in SLE-15 too ;)
I still think this belongs to _constraints ...
https://github.com/openSUSE/open-build-service/issues/1953
The problem with having _constraints is that OBS uses these as a simple filter. So a big machine with many cores and memory would be filtered out by this _constraints, while all we need is to not use all of the available cores on such a machine.
But then you leave those cores without work...
So, having thought about it, maybe the macro route is the more flexible approach.
Not sure. I think the VMs should "simply" be allocated with fewer CPUs for the job and thus free up CPUs for other VMs that are not so constrained. But I'm probably dreaming up this kind of flexible VM deployment and the setup is static...

Richard.

--
Richard Biener <rguenther@suse.de>
SUSE LINUX GmbH, GF: Felix Imendoerffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nuernberg)
On Monday 2018-04-09 13:05, Richard Biener wrote:
The problem with having _constraints is that OBS uses these as a simple filter. So a big machine with many cores and memory would be filtered out by this _constraints, while all we need is to not use all of the available cores on such a machine.
But then you leave those cores without work...
So, having thought about it, maybe the macro route is the more flexible approach.
Not sure. I think the VMs should "simply" be allocated with fewer CPUs for the job and thus free up CPUs for other VMs that are not so constrained.
But I'm probably dreaming up this kind of flexible VM deployment and the setup is static...
Some specfiles build with -j1. If bs_worker and bs_sched were a little smarter, bs_sched could give more of those -j1-style jobs to a worker machine, because right now the worker provisioning assumes -j4 all the time.
On 06/04/2018 15:52, Tomas Chvatal wrote:
Hello all,
For use in libreoffice, chromium and others I've created a macro that should allow you to limit jobs based on some constraints you can set later on in the spec to avoid OOM crashes.
The usage is pretty straightforward (once it is accepted in Tumbleweed):
===
BuildRequires:  memory-constraints

%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
===
Here the _smp_mflags value for an 8GB machine would be 4, while the default is the number of cores (let's say 16)...
Both macros %jobs and %_smp_mflags are overridden, so the integration should be really painless if you need to do something like this.
Tom
Somebody used this macro in my graphics/blender package as a replacement for a conditional that I had in place to decrease the number of build threads. They used -m 800, which appeared to work, but intermittent failures due to OOM are hard to troubleshoot. The person who had been testing blender git prior to the update to blender 2.80 informed me that there were intermittent OOM build failures, so I reduced -m to 600, but it's failing more now. I've increased it to 2G (-m 2000) because it appears to me that requesting 2G per thread should result in fewer threads, but I wonder why the original user chose such a small value in the beginning? Am I right to assume that the higher the -m value, the fewer the resulting build threads and the less memory consumed?

Thanks,
Dave P
Dave Plater wrote on Thu, 22. 08. 2019 at 12:44 +0200:
On 06/04/2018 15:52, Tomas Chvatal wrote:
Hello all,
For use in libreoffice, chromium and others I've created a macro that should allow you to limit jobs based on some constraints you can set later on in the spec to avoid OOM crashes.
The usage is pretty straightforward (once it is accepted in Tumbleweed):
===
BuildRequires:  memory-constraints

%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
===
Here the _smp_mflags value for an 8GB machine would be 4, while the default is the number of cores (let's say 16)...
Both macros %jobs and %_smp_mflags are overridden, so the integration should be really painless if you need to do something like this.
Tom
Somebody used this macro in my graphics/blender package as a replacement for a conditional that I had in place to decrease the number of build threads. They used -m 800, which appeared to work, but intermittent failures due to OOM are hard to troubleshoot. The person who had been testing blender git prior to the update to blender 2.80 informed me that there were intermittent OOM build failures, so I reduced -m to 600, but it's failing more now. I've increased it to 2G (-m 2000) because it appears to me that requesting 2G per thread should result in fewer threads, but I wonder why the original user chose such a small value in the beginning? Am I right to assume that the higher the -m value, the fewer the resulting build threads and the less memory consumed?
The macro works as 'how much memory you need for one core', so having -m 800 means it will be 800MB per core. If you use 2000, it will be 2GB per core and thus limit the threading even more.

Tom
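To make the effect concrete, here is a worked example for a hypothetical 16-core worker with roughly 16000 MB of RAM (illustrative numbers only, not taken from the actual blender build logs):

===
# hypothetical worker: 16 cores, ~16000 MB of RAM
# -m 800  -> 16000 / 800  = 20 jobs by memory, capped at 16 cores -> -j16
# -m 2000 -> 16000 / 2000 = 8 jobs by memory                      -> -j8
===

So a higher -m value means fewer parallel jobs and therefore a lower peak memory footprint for the build.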
On 22/08/2019 13:21, Tomas Chvatal wrote:
Dave Plater wrote on Thu, 22. 08. 2019 at 12:44 +0200:
On 06/04/2018 15:52, Tomas Chvatal wrote:
Hello all,
For use in libreoffice, chromium and others I've created a macro that should allow you to limit jobs based on some constraints you can set later on in the spec to avoid OOM crashes.
The usage is pretty straightforward (once it is accepted in Tumbleweed):
===
BuildRequires:  memory-constraints

%build
# require 2GB mem per thread
%limit_build -m 2000
make %{?_smp_mflags}
===
Here the _smp_mflags value for an 8GB machine would be 4, while the default is the number of cores (let's say 16)...
Both macros %jobs and %_smp_mflags are overridden, so the integration should be really painless if you need to do something like this.
Tom
Somebody used this macro in my graphics/blender package as a replacement for a conditional that I had in place to decrease the number of build threads. They used -m 800, which appeared to work, but intermittent failures due to OOM are hard to troubleshoot. The person who had been testing blender git prior to the update to blender 2.80 informed me that there were intermittent OOM build failures, so I reduced -m to 600, but it's failing more now. I've increased it to 2G (-m 2000) because it appears to me that requesting 2G per thread should result in fewer threads, but I wonder why the original user chose such a small value in the beginning? Am I right to assume that the higher the -m value, the fewer the resulting build threads and the less memory consumed?
The macro works as 'how much memory you need for one core', so having -m 800 means it will be 800MB per core. If you use 2000, it will be 2GB per core and thus limit the threading even more.
Thanks, that's what the wiki implies. Thanks to Jan for the formula; I'll put it in the wiki. I think the experienced packager who put the macro in the spec file was confused.

Dave P
On Thursday, 22 August 2019 15:13:27 CEST, Dave Plater wrote:
Somebody used this macro in my graphics/blender package as a replacement for a conditional that I had in place to decrease the number of build threads. They used -m 800, which appeared to work, but intermittent failures due to OOM are hard to troubleshoot. The person who had been testing blender git prior to the update to blender 2.80 informed me that there were intermittent OOM build failures, so I reduced -m to 600, but it's failing more now. I've increased it to 2G (-m 2000) because it appears to me that requesting 2G per thread should result in fewer threads, but I wonder why the original user chose such a small value in the beginning? Am I right to assume that the higher the -m value, the fewer the resulting build threads and the less memory consumed?
The macro works as 'how much memory you need for one core', so having -m 800 means it will be 800MB per core. If you use 2000, it will be 2GB per core and thus limit the threading even more.
Thanks, that's what the wiki implies. Thanks to Jan for the formula; I'll put it in the wiki. I think the experienced packager who put the macro in the spec file was confused.
At the time the limit was added, the value was totally appropriate, and the package built successfully several hundred times in the graphics project and in Factory.

There are several possible reasons why the build started to fail: e.g. the compiler has been updated several times, LTO has been enabled by default, and the libraries used have been updated. All of this can cause a slight or even a significant increase in required memory.

The fact that the build occasionally succeeded suggests the assumed average of 800 MByte per thread is not too far off, so an appropriate value is probably 900 or 1000 MByte per thread. Reducing it to 600 MByte is of course a change in the wrong direction.

Regards,
Stefan
On 22/08/2019 17:34, Brüns, Stefan wrote:
On Thursday, 22 August 2019 15:13:27 CEST, Dave Plater wrote:
Somebody used this macro in my graphics/blender package as a replacement for a conditional that I had in place to decrease the number of build threads. They used -m 800, which appeared to work, but intermittent failures due to OOM are hard to troubleshoot. The person who had been testing blender git prior to the update to blender 2.80 informed me that there were intermittent OOM build failures, so I reduced -m to 600, but it's failing more now. I've increased it to 2G (-m 2000) because it appears to me that requesting 2G per thread should result in fewer threads, but I wonder why the original user chose such a small value in the beginning? Am I right to assume that the higher the -m value, the fewer the resulting build threads and the less memory consumed?
The macro works as 'how much memory you need for one core', so having -m 800 means it will be 800MB per core. If you use 2000, it will be 2GB per core and thus limit the threading even more.
Thanks, that's what the wiki implies. Thanks to Jan for the formula; I'll put it in the wiki. I think the experienced packager who put the macro in the spec file was confused.
At the time the limit was added, the value was totally appropriate, and the package built successfully several hundred times in the graphics project and in Factory.
There are several possible reasons why the build started to fail: e.g. the compiler has been updated several times, LTO has been enabled by default, and the libraries used have been updated. All of this can cause a slight or even a significant increase in required memory.
The fact that the build occasionally succeeded suggests the assumed average of 800 MByte per thread is not too far off, so an appropriate value is probably 900 or 1000 MByte per thread. Reducing it to 600 MByte is of course a change in the wrong direction.
Regards,
Stefan
I've seen an lto= value greater than the resulting thread number in one failed build so far, which means that there is a bug somewhere. It's building at the moment, but I'll watch what happens. I can't understand the lto value being greater than the jobs number; AFAIK it gets its value from the resulting jobs value, but I may be wrong.

Regards,
Dave
On Thursday 2019-08-22 12:44, Dave Plater wrote:
Somebody used this macro in my graphics/blender package as a replacement for a conditional that I had in place to decrease the number of build threads. They used -m 800, which appeared to work, but intermittent failures due to OOM are hard to troubleshoot. The person who had been testing blender git prior to the update to blender 2.80 informed me that there were intermittent OOM build failures, so I reduced -m to 600, but it's failing more now. I've increased it to 2G (-m 2000) because it appears to me that requesting 2G per thread should result in fewer threads, but I wonder why the original user chose such a small value in the beginning?
Timing effects. Sometimes a compiler invocation is already done with compilation and has unwound most of its memory, while at other times you get unlucky and all compiler instances hit their maximum memory usage at once, in the most memory-intensive stage of compilation.
Am I right to assume that the higher the -m value, the fewer the resulting build threads and the less memory consumed?
threads = memory_in_vm / the_M_number
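A minimal shell sketch of that formula, assuming the available memory is read from /proc/meminfo and the result is capped at the core count as the original announcement implies (an illustration of the idea, not the macro's actual source):

===
#!/bin/sh
# per-job memory requirement in MB (the -m value); 1000 is just a placeholder default
mem_per_job=${1:-1000}

# total memory of the build VM, converted from kB (as /proc/meminfo reports it) to MB
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_total_mb=$((mem_total_kb / 1024))

# threads = memory_in_vm / the_M_number, capped at the number of cores
jobs=$((mem_total_mb / mem_per_job))
cores=$(nproc)
[ "$jobs" -gt "$cores" ] && jobs=$cores
[ "$jobs" -lt 1 ] && jobs=1

echo "-j$jobs"
===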
participants (7)
- Adam Majer
- Brüns, Stefan
- Dave Plater
- Jan Engelhardt
- Richard Biener
- Tomas Chvatal
- Tomas Chvatal