On Thu, 5 Nov 2015 14:14:33 +0100
Arvin Schnell
On Thu, Nov 05, 2015 at 12:51:43PM +0100, Josef Reidinger wrote:
Maintain such VMs? Those are just standard images that get started. If the VM has an unused disk, some integration tests for libstorage and snapper would already be possible.
Keeping it up to date: the world around it changes, so the VM also needs some love. I already have over 20 VMs for various environments, and for the older ones there are always problems when starting them because something outside changed (e.g. YaST:Devel no longer supports the old distribution, maintenance updates were released, etc.).
No, you are not talking about the cloud but just a handful of machines. I propose to just take the latest image from the buildservice, create a VM and start it. Afterwards you delete the VM again. I have heard Amazon makes good money with that concept.
You need to maintain such images, or do you think someone magically creates the latest snapper devel image for you? Otherwise you always have to install it from scratch to get all the required packages and dependencies... but wait, maybe osc already does that.
Jenkins now uses osc, which uses a chroot, so it is a more Docker-like solution. Why is using VMs better than using the osc chroot?
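For context, a minimal sketch of what such an osc-based chroot build looks like (the project and package names here are only illustrative assumptions, not the actual Jenkins setup):

```shell
# Sketch: building a package in osc's clean chroot, Docker-style.
# "YaST:Head" and "snapper" are placeholder names for illustration.
osc checkout YaST:Head snapper         # fetch spec + sources from the buildservice
cd YaST:Head/snapper
# osc sets up a fresh chroot and installs the BuildRequires from the
# spec file before running the actual build:
osc build openSUSE_Factory x86_64 snapper.spec
```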
Well, if Jenkins already uses a chroot, we can create the package there. That would not require changes to the source code. Or am I missing something?
No, because osc needs 1) a spec file (previously you needed the whole autotools monster with its dependencies to generate it) and 2) a tarball (the same problem with the autotools beast). That is the reason why YaST now has a simple command to create the tarball; spec files are not generated, and the rest is done inside osc, which ensures a proper environment according to the requirements from the spec file.
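As a rough illustration of the workflow described above (the helper command and package names below are assumptions for the sketch; the actual YaST tooling may differ):

```shell
# Sketch: create the tarball without autotools, then let osc do the rest.
# "rake tarball" and "yast2-foo" are assumed/placeholder names.
rake tarball                           # YaST helper that packs the sources
cp package/*.tar.bz2 package/*.spec ~/osc/YaST:Head/yast2-foo/
cd ~/osc/YaST:Head/yast2-foo
osc build                              # chroot gets exactly what the spec requires
```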
And it is not one image per distribution; it is one image per distribution and per package, as your environment differs for each package (the devel libraries for yast2-core are different from the ones for snapper).
That is entirely exaggerated: On my machine I can work on libstorage, snapper and yast2-core without having to change the set of installed packages. Maybe that's different in the Ruby world, but for C/C++ a few extra libraries in general do not hurt.
So you propose to create such a heavyweight beast of an image containing all development libraries in our cloud setup? And to keep everything up to date? And to be honest, it is not "some", it is a lot of libraries, their devel packages and generators (like bison or flex). So you create such a beast with tons of extra libraries for every distribution and then use it? And do you expect newcomers to do it the same way? I think maybe some other people who touch YaST and snapper or libstorage can compare how hard or easy it is to contribute to and use such infrastructure, how well it is documented, and so on.
And also do not forget about interdependencies: a new snapper may need a new libbtrfs library, so that also has to be updated.
By taking the latest image from the buildservice, the library is up to date. If not, the build will fail anyway and submitting is not an option.
So you expect the buildservice to create such a beast of an image containing all devel libraries? Did you ask the build service team how they like it? Because every small change triggers a rebuild of that beast, which is tens of GB.
And here you want a complete system where you can compile the sources directly from your editor. So you need a VM/system.
No, ...
I do want that, and I'm not alone in that view.
As I said, having a simple way to produce the spec file and tarball does not break your workflow of compiling in your VMs. I just do not like that your view is presented as the only way to do it. Josef
Regards, Arvin
--
To unsubscribe, e-mail: yast-devel+unsubscribe@opensuse.org
To contact the owner, e-mail: yast-devel+owner@opensuse.org