On Monday, 7 September 2020, 11:35:20 CEST TommyLike Hu wrote:
> Hey guys, I am an infrastructure member of the openEuler community
> (openEuler is an open source, free Linux distribution platform. The
> platform provides an open community for global developers to build an
> open, diversified, and architecture-inclusive software ecosystem. [1]).
> We use OBS for our distribution packaging and releasing. Since our
> workers do not use any virtualization technology (at least in the worker
> service; only chroot is used), we are wondering whether we could run
> our OBS workers as containers in Kubernetes clusters. After a few hours
> of research, I found a guide [2] that uses the Kubernetes device plugin
> and describes its implementation as alpha. I reckon this might be the
> right direction, but I still have some questions.
>
> 1. Are there any known issues or disadvantages to running an OBS worker
> in a container?
You can run them in a container, but you most likely want to run a KVM
build inside it, at least if you build untrusted code and/or want to
ensure that the right kernel is used for the build target.
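For reference, KVM mode is selected in the worker's sysconfig file. A minimal sketch, assuming the variable names used by the openSUSE obs-worker packaging and a hypothetical backend hostname; check /etc/sysconfig/obs-server on your installation, as names and defaults may differ:

```shell
# /etc/sysconfig/obs-server (excerpt) -- a sketch, not a complete config.
# Variable names assume the openSUSE obs-worker packaging; verify locally.

# Run each build inside a KVM virtual machine instead of a plain chroot:
OBS_VM_TYPE="kvm"

# Number of parallel worker instances on this host/container:
OBS_WORKER_INSTANCES="2"

# Repo server(s) this worker fetches jobs from (hostname is a placeholder):
OBS_REPO_SERVERS="obs-backend.example.com:5252"

# Note: KVM mode additionally needs root/swap devices for the build VMs;
# see the OBS_VM_* autosetup variables in the same file.
```

With this in place, a container started with access to /dev/kvm (as in the podman example below in this thread) can run each build job in its own VM.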
> 2. All of the container images are tagged "unstable"
> (registry.opensuse.org/obs/server/unstable/container/leap151/containers/openbuildservice/backend,
> for instance). Are there any official Docker images, and where can I get
> the Dockerfile for the image?
WIP, but I am happy to hear feedback about this one:

  # podman pull registry.opensuse.org/home/adriansuse/branches/opensuse/templates/images/15.2/containers/osc
  # podman run -ti --device /dev/kvm:/dev/kvm $IMAGE_ID

That container is primarily for osc, but you should be able to install
obs-worker inside it as well and run it in KVM mode.
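Building on that, a rough sketch of what installing the worker inside such a container could look like. Package, file, and service names here are taken from openSUSE packaging and are assumptions to verify against your base image; this is not a tested recipe:

```shell
# Inside the running container, as root. Assumes a zypper-based image
# with the obs-worker package available in its repositories:
zypper --non-interactive install obs-worker

# Configure KVM mode in /etc/sysconfig/obs-server (OBS_VM_TYPE="kvm" and
# related variables) before starting. Without a full systemd inside the
# container, the init script may need to be invoked directly instead of
# going through systemctl:
systemctl start obsworker || /usr/sbin/rcobsworker start
```

The container must have been started with /dev/kvm passed through (as in the podman run line above) for KVM builds to work.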
> 3. If we only containerize our OBS workers, we need to route requests
> from the backend to the workers via a NodePort, and therefore we need to
> change the address the worker registers to the node's IP. Is there a way
> to achieve this?
The worker currently has to register with the address at which it is
reachable.
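Given that constraint, one possible shape for the NodePort setup (all names and ports below are hypothetical placeholders; the worker's actual listen port depends on your worker configuration, e.g. OBS_WORKER_PORTBASE in the openSUSE packaging):

```shell
# Hypothetical sketch: expose a single worker pod on a NodePort so a
# backend outside the cluster can reach it. "obs-worker-0" and port 9191
# are placeholders, not values from this thread.
kubectl expose pod obs-worker-0 --type=NodePort --port=9191 --name=obs-worker-0

# Look up the NodePort Kubernetes allocated; the backend must be able to
# reach <node-ip>:<node-port>, so that is the address the worker would
# have to register with the repo server:
kubectl get service obs-worker-0 -o jsonpath='{.spec.ports[0].nodePort}'
```

The node's IP can be made visible inside the pod via the Kubernetes downward API (status.hostIP), which the worker startup could then use as its advertised address; whether the worker accepts an externally supplied address depends on the OBS version in use.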
--
Adrian Schroeter