[yast-devel] Build Times Strikes Back
Hi,
today, while waiting for OBS to build a new CI container for testing the new RuboCop, I checked how the situation has changed since our last effort to reduce the build time of the YaST stack. I used our old tool [1], and there is also the old data [2], so I can compare the critical path (which is the most interesting number, as it counts the minimum time needed to build the YaST stack given identical machines in unlimited quantity).

So how did it change?

critical path:
- total time: 1780
+ total time: 3452
path:
- - yast2-schema [141s]
- - autoyast2 [168s]
- - yast2-update [143s]
- - yast2-packager [75s]
- - yast2-storage [226s]
- - yast2-perl-bindings [236s]
- - yast2-ruby-bindings [181s]
- - yast2-ycp-ui-bindings [107s]
- - yast2-core [426s]
- - yast2-devtools [77s]
+ - yast2-schema [134s]
+ - yast2-ntp-client [93s]
+ - autoyast2 [723s]
+ - yast2-installation [721s]
+ - yast2-network [171s]
+ - yast2-packager [93s]
+ - yast2-storage-ng [186s]
+ - libstorage-ng [1331s]

The first thing visible immediately is that it increased a lot: from circa 30 minutes to almost 60 minutes (so no surprise that I am waiting for a new release :)

So I checked some values more deeply, as some of the times looked strange. Sometimes it looks like strange OBS behavior, because the total time is much bigger than the individual steps, e.g. for yast2-installation [3] or autoyast2 [4]; but for libstorage-ng it looks like real data [5]. So I checked whether libstorage-ng was simply unlucky and hit a weak machine, but it builds with -j6, which is not bad from my point of view. So I did a quick analysis of where it spends its time (from the log [6]):

total: 1327s
unpacking: 1s
compilation (configure + make all): 921s (doxygen: 10s, translations: 0s, bindings: 542s)
install: 8s
deduplication: 2s
tests: 260s

For comparison, the old libstorage needed 549 seconds in total, so there really is a slowdown in build time. The question is how to speed up the building process. Any ideas? I think 20 minutes for the initial building block of all YaST modules is too much.

A few wild ideas:

- Create a libstorage-ng-bootstrap package that would be used for building yast2-storage-ng. That bootstrap would skip the tests and the Python bindings, and maybe even compile with less aggressive g++ options, which should help a lot. And of course the proper package with all of this would still be built afterwards. (A rough spec sketch follows after the link list.)
- Build the Python and Ruby bindings in parallel. Is that doable? Or ideally do it in parallel with the other compilation tasks (not sure whether disk seeking would then hit us back). The non-recursive feature of autotools could help here [7].

Any other idea that can help?

Josef

[1] https://github.com/mvidner/rpm-build-dependencies
[2] https://github.com/mvidner/rpm-build-dependencies/blob/master/yast_deps.yaml
[3] https://build.opensuse.org/package/statistics/YaST:Head/yast2-installation/o...
[4] https://build.opensuse.org/package/statistics/YaST:Head/autoyast2/openSUSE_F...
[5] https://build.opensuse.org/package/statistics/YaST:Head/libstorage-ng/openSU...
[6] https://build.opensuse.org/build/YaST:Head/openSUSE_Factory/x86_64/libstorag...
[7] https://autotools.io/automake/parallel.html
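For illustration, the bootstrap idea could look roughly like this at the spec level. Everything in the sketch (the package name, the configure flags, the compiler options) is a hypothetical assumption, not an existing package or a known configure interface:

# libstorage-ng-bootstrap.spec (hypothetical sketch)
Name:           libstorage-ng-bootstrap
Summary:        Stripped-down libstorage-ng to unblock the YaST stack early
# Sources and BuildRequires as in libstorage-ng, minus Python and doxygen.

%build
# Skip the Python bindings and the docs; use cheaper compiler flags.
# The --disable-* flag names are assumptions about the configure script.
%configure --disable-python --disable-doc CXXFLAGS="-O1"
make %{?_smp_mflags}

%check
# Intentionally empty: the bootstrap flavor skips the test suite.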
On Tue, Jun 25, 2019 at 5:02 PM, Josef Reidinger <jreidinger@suse.cz> wrote:
Hi, <snip>
Any other idea that can help?
I would be curious how quick meson/ninja would be as a replacement for configure + make all. It promises to be quite a bit faster compared to other tools ;)

LCP [Stasiek]
https://lcp.world
On Tue, 25 Jun 2019 17:05:31 +0200, Stasiek Michalski <hellcp@opensuse.org> wrote:
On Tue, Jun 25, 2019 at 5:02 PM, Josef Reidinger <jreidinger@suse.cz> wrote:
Hi, <snip>
Any other idea that can help?
I would be curious how quick meson/ninja would be as a replacement for configure + make all. It promises to be quite a bit faster compared to other tools ;)
LCP [Stasiek] https://lcp.world
Hi, that sounds like an interesting idea. Feel free to play with it and measure it. Ideally run 'rake osc:build' on your machine (so there are no side effects from different build hosts) five times with the old setup and the same number of times with the new one, then compare the average times after removing the extremes that can happen. I would also be curious whether we really spend the time in g++ and swig, or whether it is some inefficiency in autotools and the way it distributes the work.

Josef
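For illustration, such a measurement could be scripted along these lines (a sketch only; the output file name is made up, 'rake osc:build' is the usual YaST rake task):

#!/bin/bash
# Time five builds, then average the middle three (drop fastest and slowest).
rm -f build-times.txt
for i in 1 2 3 4 5; do
  /usr/bin/time -f "%e" -o build-times.txt -a rake osc:build
done
sort -n build-times.txt | sed '1d;$d' | awk '{ sum += $1 } END { print sum / NR, "s average" }'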
On 6/26/19 7:42 AM, Josef Reidinger wrote:
On Tue, 25 Jun 2019 17:05:31 +0200, Stasiek Michalski <hellcp@opensuse.org> wrote:
On Tue, Jun 25, 2019 at 5:02 PM, Josef Reidinger <jreidinger@suse.cz> wrote:
Hi, <snip>
Any other idea that can help?
I would be curious how quick meson/ninja would be as a replacement for configure + make all. It promises to be quite a bit faster compared to other tools ;)
LCP [Stasiek] https://lcp.world
In fact, my plan was to play with libstorage-ng + meson during this week, but I am still working on another thing :)
<snip>
On Tue, Jun 25, 2019 at 05:02:43PM +0200, Josef Reidinger wrote:

Hi.
today, while waiting for OBS to build a new CI container for testing the new RuboCop, I checked how the situation has changed since our last effort to reduce the build time of the YaST stack.
What actually caused a rebuild of libstorage-ng? libstorage-ng does not depend on rubocop.
For comparison, the old libstorage needed 549 seconds in total, so there really is a slowdown in build time. The question is how to speed up the building process. Any ideas? I think 20 minutes for the initial building block of all YaST modules is too much.
The build times look high to me. On my machine 'osc build' takes less than 500s (uses make -j8).
- Create a libstorage-ng-bootstrap package that would be used for building yast2-storage-ng. That bootstrap would skip the tests and the Python bindings, and maybe even compile with less aggressive g++ options, which should help a lot. And of course the proper package with all of this would still be built afterwards.
Both the Python and Ruby bindings could be built in separate packages. But that would (likely) have the downside that the bindings cannot be built in parallel with the library.
- Build the Python and Ruby bindings in parallel. Is that doable? Or ideally do it in parallel with the other compilation tasks (not sure whether disk seeking would then hit us back). The non-recursive feature of autotools could help here [7].
That should work. I have a hackish bash script that does so (even in parallel to the rest of the build) and needs about 6 minutes.

ciao
Arvin
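A minimal sketch of that approach (not Arvin's actual script; the directory names are assumptions about the source layout) could be:

#!/bin/bash
# Build the core C++ library first, then both bindings concurrently.
set -e
make -j"$(nproc)" -C storage             # assumed library directory

make -j"$(nproc)" -C bindings/python &   # assumed binding directories
py=$!
make -j"$(nproc)" -C bindings/ruby &
rb=$!
wait "$py"   # with set -e, abort here if the Python bindings failed
wait "$rb"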
On Tue, 25 Jun 2019 15:46:45 +0000, Arvin Schnell <aschnell@suse.com> wrote:
On Tue, Jun 25, 2019 at 05:02:43PM +0200, Josef Reidinger wrote:
Hi.
today, while waiting for OBS to build a new CI container for testing the new RuboCop, I checked how the situation has changed since our last effort to reduce the build time of the YaST stack.
What actually caused a rebuild of libstorage-ng? libstorage-ng does not depend on rubocop.
Hi, it really does not depend on it. I was just checking the build times while waiting for a full rebuild caused by changes in yast2-devtools (which does not trigger a libstorage-ng rebuild), and I wanted to share what I found.
For comparison, the old libstorage needed 549 seconds in total, so there really is a slowdown in build time. The question is how to speed up the building process. Any ideas? I think 20 minutes for the initial building block of all YaST modules is too much.
The build times look high to me. On my machine 'osc build' takes less than 500s (uses make -j8).
Maybe it is worker specific? I checked more places where it builds, and in Factory it took 2261 seconds [1]. Maybe disk seeking? Or a less powerful CPU? In this case it was -j4, which alone should make it at most two times slower than your -j8. I found that the peak memory usage is 2708 MB (visible in [1]), so maybe just adding a constraints file that requires at least 4 GiB of RAM would help to prevent swapping and would also help with caching of disk content?
- Create a libstorage-ng-bootstrap package that would be used for building yast2-storage-ng. That bootstrap would skip the tests and the Python bindings, and maybe even compile with less aggressive g++ options, which should help a lot. And of course the proper package with all of this would still be built afterwards.
Both the Python and Ruby bindings could be built in separate packages. But that would (likely) have the downside that the bindings cannot be built in parallel with the library.
Probably not worth it, as yast2-storage-ng depends on the Ruby bindings, so the whole YaST stack would still be blocked.
- Build the Python and Ruby bindings in parallel. Is that doable? Or ideally do it in parallel with the other compilation tasks (not sure whether disk seeking would then hit us back). The non-recursive feature of autotools could help here [7].
That should work. I have a hackish bash script that does so (even in parallel to the rest of the build) and needs about 6 minutes.
6 minutes just for the bindings, or for the whole build? Maybe it would be worth a try. Or maybe we should really try the less hackish non-recursive Makefile.am, or an alternative build tool. I am not sure what would provide the most benefit.
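To illustrate the non-recursive idea from [7]: instead of descending into subdirectories via SUBDIRS, one top-level Makefile.am includes the binding fragments, so a single make can schedule targets from all directories at once. A minimal sketch with hypothetical file names:

# Top-level Makefile.am (non-recursive sketch, file names are hypothetical)
AUTOMAKE_OPTIONS = subdir-objects

lib_LTLIBRARIES = libstorage-ng.la
libstorage_ng_la_SOURCES = storage/Storage.cc storage/Devicegraph.cc

# The bindings live in the same Makefile, so 'make -jN' can build them in
# parallel with each other and with the library's remaining objects.
include bindings/python/Makefile.inc
include bindings/ruby/Makefile.inc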
Josef

[1] https://build.opensuse.org/package/statistics/openSUSE:Factory/libstorage-ng...
On Wed, Jun 26, 2019 at 10:39:56AM +0200, Josef Reidinger wrote:
The build times look high to me. On my machine 'osc build' takes less than 500s (uses make -j8).
Maybe it is worker specific? I checked more places where it builds, and in Factory it took 2261 seconds [1]. Maybe disk seeking? Or a less powerful CPU? In this case it was -j4, which alone should make it at most two times slower than your -j8. I found that the peak memory usage is 2708 MB (visible in [1]), so maybe just adding a constraints file that requires at least 4 GiB of RAM would help to prevent swapping and would also help with caching of disk content?
Could be worth a try, even though setting a constraint of 4 GiB physical memory does decrease the number of available build hosts on Intel. https://openbuildservice.org/help/manuals/obs-reference-guide/cha.obs.build_....
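For reference, a minimal _constraints file along the lines Josef suggested could look like this (a sketch following the syntax from the guide above):

<!-- _constraints: only accept workers with at least 4 GiB of physical memory -->
<constraints>
  <hardware>
    <physicalmemory>
      <size unit="G">4</size>
    </physicalmemory>
  </hardware>
</constraints>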
That should work. I have a hackish bash script that does so (even in parallel to the rest of the build) and needs about 6 minutes.
6 minutes just for the bindings, or for the whole build? Maybe it would be worth a try. Or maybe we should really try the less hackish non-recursive Makefile.am, or an alternative build tool. I am not sure what would provide the most benefit.
6 minutes for everything (library, bindings, tests, examples) after 'make clean'.

ciao
Arvin
On Wed, Jun 26, 2019 at 10:39:56AM +0200, Josef Reidinger wrote:
Maybe it is worker specific? I checked more places where it builds, and in Factory it took 2261 seconds [1]. Maybe disk seeking? Or a less powerful CPU? In this case it was -j4, which alone should make it at most two times slower than your -j8. I found that the peak memory usage is 2708 MB (visible in [1]), so maybe just adding a constraints file that requires at least 4 GiB of RAM would help to prevent swapping and would also help with caching of disk content?
I discussed that today with a build service expert.

- Constraints are intended for hard requirements: without them the build would fail.
- They can backfire: waiting for a powerful machine could take longer than the build on a less powerful but available machine would have taken. Not so likely on Intel, but it could happen on ARM.
- Unfortunately it is not possible to specify a minimal memory size based on the number of cores.
- That some people want certain packages to build fast may collide with the expectations of other people and may increase the overall build time for the project or other projects.

ciao
Arvin