On Thursday 2019-08-22 12:44, Dave Plater wrote:
Somebody used this macro in my graphics/blender package as a replacement for a conditional I had used to decrease the number of build threads. They used -m 800, which appeared to work, but intermittent failures due to OOM are hard to troubleshoot. The person who had been testing blender git prior to the update to blender 2.80 informed me that there were intermittent OOM build failures, so I reduced -m to 600, but it's failing more now. I've increased it to 2G (-m 2000) because it seems to me that requesting 2G per thread should result in fewer threads, but I wonder why the original user chose such a small value in the first place?
Timing effects. Sometimes a compiler invocation is already done with compilation and has released most of its memory, while at other times you get unlucky and all compiler instances hit their maximum memory usage simultaneously because they happen to be in the most memory-intensive stage of compilation at the same moment.
Am I right to assume that the higher the -m value, the fewer resulting build threads and the less memory consumed?
threads = memory_in_vm / the_M_number
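The formula above can be sketched as a small shell script. This is an illustrative reconstruction of the thread-limiting logic, not the actual macro implementation; the variable names and the use of /proc/meminfo are assumptions:

```shell
#!/bin/sh
# Sketch (assumed behavior): derive a job count from available memory,
# mirroring threads = memory_in_vm / the_M_number.

MEM_PER_JOB_MB=${1:-2000}   # the -m value, e.g. -m 2000 for ~2 GB per job

# Total memory in the build VM, in MB (read from /proc/meminfo on Linux)
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
total_mb=$((total_kb / 1024))

# Integer division gives the thread count; never drop below one job
jobs=$((total_mb / MEM_PER_JOB_MB))
[ "$jobs" -lt 1 ] && jobs=1

echo "Would build with make -j$jobs"
```

With, say, 16000 MB of RAM in the build VM, -m 800 yields 20 jobs while -m 2000 yields 8, which is why raising -m reduces both parallelism and peak memory pressure.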