Hi,

I'm currently trying to understand how parameters are used to call the OBS worker virtual machine. If I run a local build via "osc build --vm-type kvm ..." the parameters are set by the code from the build package, in the script:

  /usr/lib/build/build-vm-kvm

If the build is done by the OBS server backend it looks different, and I wondered: how are the parameters defined when running your own OBS server?

Can someone point me in the right direction?

Thanks

Regards,
Marcus
--
Public Key available via: https://keybase.io/marcus_schaefer/key.asc
keybase search marcus_schaefer
-------------------------------------------------------
Marcus Schäfer                  Am Unterösch 9
Tel: +49 7562 905437            D-88316 Isny / Rohrdorf
                                Germany
-------------------------------------------------------
On Montag, 10. Januar 2022, 13:34:14 CET Marcus Schäfer wrote:
Hi,
I'm currently trying to understand how parameters are used to call the OBS worker virtual machine. If I run a local build via "osc build --vm-type kvm ..." the parameters are set by the code from the build package, in the script:
/usr/lib/build/build-vm-kvm
If the build is done by the OBS server backend it looks different, and I wondered: how are the parameters defined when running your own OBS server?
Can someone point me in the right direction?
it should be the same, but it may depend on your hardware.
You need to be a bit more specific about which differences you see exactly.
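One way to pin down such differences is to extract the qemu invocation from both build logs and diff them word by word. A minimal sketch; the two log files and their contents below are made up for illustration, not from real builds:

```shell
# Fabricated example logs standing in for a local "osc build" log
# and a server-side worker log:
cat > /tmp/local.log <<'EOF'
[   20s] /usr/bin/qemu-system-aarch64 -kernel /var/tmp/build-root/boot/kernel -initrd /var/tmp/build-root/boot/initrd -m 8192
EOF
cat > /tmp/server.log <<'EOF'
[   20s] /usr/bin/qemu-system-aarch64 -kernel /boot/Image -initrd /boot/initrd -m 8192
EOF

# Put each argument of the qemu call on its own line, then diff:
for f in /tmp/local.log /tmp/server.log; do
  grep -o '/usr/bin/qemu-system-.*' "$f" | tr ' ' '\n' > "$f.args"
done
diff /tmp/local.log.args /tmp/server.log.args || true
```

With real logs, the diff immediately shows which arguments (here the -kernel/-initrd values) differ between the two setups.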
--
Adrian Schroeter
Hi,
You need to be a bit more specific about which differences you see exactly.
Thanks for the quick reply, and yes, I should be more specific. The issue is with building an aarch64 package on a custom OBS server instance. The worker calls qemu as follows:

  [    0s] Using BUILD_ROOT=/obsworker/worker/root_2/.mount
  [    0s] Using BUILD_ARCH=aarch64:aarch64_ilp32:armv8l
  [    0s] Doing kvm build in /obsworker/worker/root_2/root
  [    0s] ...
  [   20s] booting kvm...
  [   20s] ### VM INTERACTION START ###
  [   20s] Using UART console
  [   20s] /usr/bin/qemu-system-aarch64 -nodefaults -no-reboot -nographic -vga none
           -cpu host -enable-kvm -M virt,gic-version=host
           -object rng-random,filename=/dev/random,id=rng0
           -device virtio-rng-device,rng=rng0 -runas qemu -net none
           -kernel /boot/Image -initrd /boot/initrd
           -append root=/dev/disk/by-id/virtio-0 rootfstype=ext3
             rootflags=data=writeback,nobarrier,commit=150,noatime
             ext4.allow_unsupported=1 mitigations=off panic=1 quiet no-kvmclock
             elevator=noop nmi_watchdog=0 rw rd.driver.pre=binfmt_misc
             console=ttyAMA0 init=/.build/build
           -m 8192
           -drive file=/obsworker/worker/root_2/root,format=raw,if=none,id=disk,cache=unsafe
           -device virtio-blk-device,drive=disk,serial=0
           -drive file=/obsworker/worker/root_2/swap,format=raw,if=none,id=swap,cache=unsafe
           -device virtio-blk-device,drive=swap,serial=1
           -serial stdio
           -chardev socket,id=monitor,server,nowait,path=/obsworker/worker/root_2/root.qemu/monitor
           -mon chardev=monitor,mode=readline -smp 8

and fails after some time with:

  [  129s] [*     ] A start job is running for dev-disk…virtio\x2d0.de

The device never appears. After debugging we found the reason in the different values for -initrd and -kernel. If I take the above call and only replace -initrd and -kernel as follows:

  -kernel /obsworker/worker/root_2/.mount/boot/kernel
  -initrd /obsworker/worker/root_2/.mount/boot/initrd

the build works and succeeds. I'm pretty sure this is a configuration issue on this server, but I have no clue where to look. Any help is much appreciated.
Thanks

Regards,
Marcus
On Jan 10 2022, Marcus Schäfer wrote:
After debugging we found the reason in the different values for -initrd and -kernel. If I take the above call and only replace -initrd and -kernel as follows:
-kernel /obsworker/worker/root_2/.mount/boot/kernel
-initrd /obsworker/worker/root_2/.mount/boot/initrd
The build works and succeeds.
I think you are missing "VMinstall: kernel-obs-build" in the prjconf.
--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
Hi,
I think you are missing "VMinstall: kernel-obs-build" in the prjconf.
Thanks, I tried that and it results in an unresolvable. It seems there is no kernel-obs-build available for aarch64.

I wanted to understand the concept better and was reading more code. Please correct me if I'm wrong. In the build package code there is:

  /usr/lib/build/build-vm-kvm

and

  ---snip
  armv8l|aarch64)
      kvm_bin="/usr/bin/qemu-system-aarch64"
      kvm_console=ttyAMA0
      vm_kernel=/boot/Image
      vm_initrd=/boot/initrd
      test -e /boot/kernel.obs.guest && vm_kernel=/boot/kernel.obs.guest
      test -e /boot/initrd.obs.guest && vm_initrd=/boot/initrd.obs.guest
  ---snap

This is the issue we see. The worker does not provide /boot/kernel.obs.guest and /boot/initrd.obs.guest, so the default settings for kernel and initrd apply, but that doesn't work.

The suggested instruction:

  VMinstall: kernel-obs-build

would install the package on the worker prior to the build-vm-kvm code, and this would lead to the use of the obs.guest initrd/kernel files. Is that correct?

Next I looked at one of my packages that builds successfully on build.opensuse.org. In the log I see:

  [    1s] unpacking preinstall image Ubuntu:debbuild/Ubuntu_20.04/preinstallimage-base [6fe2b94cfc6f3d08b3f38c628716bdc5]
  [    4s] booting kvm...
  [    4s] ### VM INTERACTION START ###
  [    4s] Using UART console
  [    4s] /usr/bin/qemu-system-aarch64 ... -kernel /boot/kernel.obs.guest -initrd /boot/initrd.obs.guest ...

There is no extra install of kernel-obs-build. Instead, a preinstall image which probably provides the required data is used. I see this behavior when building for Ubuntu using debbuild. Package builds for e.g. Leap or Fedora on aarch64 seem to provide a kernel-obs-build package, and the log contains:

  [   15s] [8/47] preinstalling kernel-obs-build...
In all of these situations the location for initrd and kernel is passed to kvm as:

  -kernel /var/cache/obs/worker/root_13/.mount/boot/kernel
  -initrd /var/cache/obs/worker/root_13/.mount/boot/initrd

which was weird, because I thought it would also be *.obs.guest.

Can you advise what the preferred solution is to fix this OBS instance to allow package builds on the aarch64 workers? Sorry if the questions sound dumb, I'm just reading my way through the code and got confused here and there.

Thanks

Regards,
Marcus
On Montag, 10. Januar 2022, 21:53:41 CET Marcus Schäfer wrote:
Hi,
It think you are missing "VMinstall: kernel-obs-build" in the prjconf.
Thanks, I tried that and it results in an unresolvable. It seems there is no kernel-obs-build available for aarch64.
We have it on all of our current supported distros IIRC.
I wanted to understand the concept better and was reading more code. Please correct me if I'm wrong. In the build package code there is:
/usr/lib/build/build-vm-kvm
and
---snip
armv8l|aarch64)
    kvm_bin="/usr/bin/qemu-system-aarch64"
    kvm_console=ttyAMA0
    vm_kernel=/boot/Image
    vm_initrd=/boot/initrd
    test -e /boot/kernel.obs.guest && vm_kernel=/boot/kernel.obs.guest
    test -e /boot/initrd.obs.guest && vm_initrd=/boot/initrd.obs.guest
---snap
This is the issue we see. The worker does not provide /boot/kernel.obs.guest and /boot/initrd.obs.guest, so the default settings for kernel and initrd apply, but that doesn't work.
These are just fallbacks, but for a clean and reproducible build only the kernel-obs-build content should be used.
The suggested instruction:
VMinstall: kernel-obs-build
would install the package on the worker prior to the build-vm-kvm code, and this would lead to the use of the obs.guest initrd/kernel files. Is that correct?
it would unpack it during the preinstall phase, and kvm would use it then.
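For anyone following along, the change discussed here is a single line in the project configuration (the prjconf, editable via "osc meta prjconf -e <project>"; which project to edit depends on your setup):

```
# prjconf: have the worker unpack this package during the preinstall
# phase, before booting the KVM guest, so build-vm-kvm can pick up the
# guest kernel/initrd instead of its /boot/Image fallbacks
VMinstall: kernel-obs-build
```

This only resolves, of course, if the target distribution actually ships a kernel-obs-build package for the build architecture.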
Next I looked at one of my packages that builds successfully on build.opensuse.org. In the log I see:
[    1s] unpacking preinstall image Ubuntu:debbuild/Ubuntu_20.04/preinstallimage-base [6fe2b94cfc6f3d08b3f38c628716bdc5]
[    4s] booting kvm...
ah, debian, no one has packaged a kernel-obs-build there yet. Should be done, but no one has invested the time so far.
[    4s] ### VM INTERACTION START ###
[    4s] Using UART console
[    4s] /usr/bin/qemu-system-aarch64 ... -kernel /boot/kernel.obs.guest -initrd /boot/initrd.obs.guest ...
There is no extra install of kernel-obs-build. Instead, a preinstall image which probably provides the required data is used.
I see this behavior when building for Ubuntu using debbuild. Package builds for e.g. Leap or Fedora on aarch64 seem to provide a kernel-obs-build package, and the log contains:
[ 15s] [8/47] preinstalling kernel-obs-build...
In all of these situations the location for initrd and kernel is passed to kvm as:
-kernel /var/cache/obs/worker/root_13/.mount/boot/kernel
-initrd /var/cache/obs/worker/root_13/.mount/boot/initrd
which was weird, because I thought it would also be *.obs.guest.
Can you advise what the preferred solution is to fix this OBS instance to allow package builds on the aarch64 workers ?
do not use *.obs.guest, it is more of a hack, and it means that the worker setup
has an influence on the build result. Also, you might need different
workers per distribution in the future ...
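The selection logic being discussed can be sketched in isolation. This mirrors the build-vm-kvm fallback quoted earlier in the thread, but runs against a temporary directory instead of the worker's real /boot (all paths here are illustrative):

```shell
# Simulate build-vm-kvm's kernel/initrd selection in a scratch directory.
boot=$(mktemp -d)
touch "$boot/Image" "$boot/initrd"   # the worker's own kernel: the fallback

vm_kernel=$boot/Image
vm_initrd=$boot/initrd
# Only when the *.obs.guest files exist does the guest kernel win;
# without them, the worker's host kernel leaks into the build:
test -e "$boot/kernel.obs.guest" && vm_kernel=$boot/kernel.obs.guest
test -e "$boot/initrd.obs.guest" && vm_initrd=$boot/initrd.obs.guest

echo "kernel: ${vm_kernel##*/}"   # -> "kernel: Image" (fallback in use)
```

This makes Adrian's point concrete: whatever happens to sit in the worker's /boot decides the outcome, which is exactly the kind of worker-setup influence on the build result that the preinstalled kernel-obs-build is meant to avoid.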
--
Adrian Schroeter
Hi,
ah, debian, no one has packaged a kernel-obs-build there yet.
Ok, thanks
-kernel /var/cache/obs/worker/root_13/.mount/boot/kernel -initrd /var/cache/obs/worker/root_13/.mount/boot/initrd
Even without a kernel-obs-build there are a "boot/Image" and a "boot/initrd" file present in the BUILD_ROOT directory created by OBS. In this example:

  [    0s] Using BUILD_ROOT=/obsworker/worker/root_6/.mount

there are:

  /obsworker/worker/root_6/.mount/boot/Image
  /obsworker/worker/root_6/.mount/boot/initrd

These files do not belong to any package and seem to be copied in somehow when the buildservice runs init_buildsystem. Can you help me understand where they come from?

Thanks

Regards,
Marcus
On Mittwoch, 12. Januar 2022, 12:40:09 CET Marcus Schäfer wrote:
Hi,
ah, debian, no one has packaged a kernel-obs-build there yet.
Ok, thanks
-kernel /var/cache/obs/worker/root_13/.mount/boot/kernel -initrd /var/cache/obs/worker/root_13/.mount/boot/initrd
Even without a kernel-obs-build there are a "boot/Image" and a "boot/initrd" file present in the BUILD_ROOT directory created by OBS. In this example:
[ 0s] Using BUILD_ROOT=/obsworker/worker/root_6/.mount
and there are:
/obsworker/worker/root_6/.mount/boot/Image
/obsworker/worker/root_6/.mount/boot/initrd
These files do not belong to any package and seem to be copied in somehow when the buildservice runs init_buildsystem.
Can you help me understand where they come from?
they must be part of some package of your distro.
--
Adrian Schroeter
Hi,
and there are:
/obsworker/worker/root_6/.mount/boot/Image
/obsworker/worker/root_6/.mount/boot/initrd
These files do not belong to any package and seem to be copied in somehow when the buildservice runs init_buildsystem.
Can you help me understand where they come from?
they must be part of some package of your distro.
Hmm, I was not able to identify any kernel package from the distro which would provide these files. The distro the package is built against is Debian. If I run

  strings /obsworker/worker/root_6/.mount/boot/Image | grep version

I get information like this:

  Linux version 5.3.18-na120.2.1-default (geeko@buildhost) (gcc version 10.3.1 20210707 [revision 048117e16c77f82598fca9af585500572d46ad73] (SUSE Linux)) #1 SMP Wed Apr 28 10:54:41 UTC 2021 (ba3c2e9)

It still feels like these files are taken from the worker (Leap 15.2) or from somewhere else. I cannot access the server (only the worker) and I don't understand where these kernel and initrd files come from. Any idea?

Thanks

Regards,
Marcus
On Mittwoch, 12. Januar 2022, 18:22:05 CET Marcus Schäfer wrote:
Hi,
and there are:
/obsworker/worker/root_6/.mount/boot/Image /obsworker/worker/root_6/.mount/boot/initrd
These files do not belong to any package and seems to be copied somehow when the buildservice runs init_buildsystem
Can you help me to understand were they come from ?
they must be part of any package of your distro.
Hmm, I was not able to identify any kernel package from the distro which would provide these files. The distro the package is built against is Debian. If I run
strings /obsworker/worker/root_6/.mount/boot/Image | grep version
I get information like this:
Linux version 5.3.18-na120.2.1-default (geeko@buildhost) (gcc version 10.3.1 20210707 [revision 048117e16c77f82598fca9af585500572d46ad73] (SUSE Linux)) #1 SMP Wed Apr 28 10:54:41 UTC 2021 (ba3c2e9)
seems to be a suse kernel at least...
It still feels like these files are taken from the worker (Leap 15.2) or from somewhere else. I cannot access the server (only the worker) and I don't understand where these kernel and initrd files come from.
Any idea ?
not really.
however, even when they exist there, they shouldn't be used. Can you check
your qemu cli?
Or can you create a reproducer somewhere in build.opensuse.org?
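Checking the qemu cli from outside can be done on the build log alone: the -kernel/-initrd arguments can be grepped out of it. A small sketch; the log file and its single line below are fabricated for illustration:

```shell
# Fabricated log line; point this at the real build log instead.
cat > /tmp/build.log <<'EOF'
[   20s] /usr/bin/qemu-system-aarch64 -nodefaults -kernel /boot/Image -initrd /boot/initrd -m 8192
EOF

# Show only the kernel/initrd arguments actually passed to qemu
# ("--" stops option parsing so the leading dash in the pattern is safe):
grep -oE -- '-(kernel|initrd) [^ ]+' /tmp/build.log
```

If the output shows /boot/Image and /boot/initrd rather than the *.obs.guest or build-root paths, the guest is booting the fallback kernel, which matches the failure described earlier in the thread.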
--
Adrian Schroeter
participants (3)
- Adrian Schröter
- Andreas Schwab
- Marcus Schäfer