Hi Adrian,

after deleting the cached packages, the same package now took 23:26 minutes to download. It looks like the situation has improved a lot, considering that previously I probably still had packages in the cache. Using mtr also shows much improved response times. I did not measure before the move to Prague, but it now looks like it can be used the way it is.

Berthold

On Wed, 04 Oct 2023 11:18:31 +0200 Adrian Schröter <adrian@suse.de> wrote:
Hello Berthold,
and osc and interconnect users in general.
We found an issue in haproxy running in front of OBS which caused hanging connections.
Could you confirm whether your connection issues have improved now?
thanks adrian
On Friday, 8 September 2023, 15:36:29 CEST Berthold Gunreben wrote:
On Fri, 08 Sep 2023 15:29:37 +0200 Adrian Schröter <adrian@suse.de> wrote:
On Friday, 8 September 2023, 15:15:38 CEST Berthold Gunreben wrote:
Hi,
I just wonder if it is normal that I currently get extremely slow bandwidth to build.o.o when trying to do an openSUSE:Factory:zSystems test build with type qemu.
We are definitely not in the best shape performance-wise at the moment, but it is not as bad as you are describing.
The package I am interested in is python-onnx, and it has now been downloading for 58 minutes with no end in sight.
# time eosc getbinaries openSUSE:Factory:zSystems python-onnx standard s390x
...
real    1m0,795s
user    0m5,258s
sys     0m0,495s
from my home DSL line. Yes, that is slow for such small rpms, but it is not 1 hour ...
this is about
# osc co openSUSE:Factory:zSystems/python-onnx && cd $_
which is kind of ok, but then:
# osc build --vm-type qemu standard s390x
and that is now doing
fetching packages for 'openSUSE:Factory:zSystems': | Elapsed Time: 1:19:17
(ok ... just finished now, so that is how long it took)
Typically, qemu builds are slow, but I guess building will be way faster than getting the packages for the build system.
Berthold
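
As an aside, here is a minimal sketch of how the cache flush, the timed fetch, and the mtr latency check discussed in this thread could be reproduced. The cache path and the measured host name are assumptions, not taken from the thread: the osc package cache location is whatever packagecachedir is set to in your osc configuration, and the relevant download host may differ from the one shown.

delete the local osc package cache (assumed default path; adjust to your packagecachedir setting):
# rm -rf /var/tmp/osbuild-packagecache/*
re-run the timed fetch for the same package:
# time osc getbinaries openSUSE:Factory:zSystems python-onnx standard s390x
check round-trip times along the network path (host name is an assumption):
# mtr -rwc 100 api.opensuse.org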