Mailinglist Archive: mirror (15 mails)

Re: Fwd: [mirror] Suboptimal (for mirrors) download pattern by opensuse clients
Hi,

Sorry for top posting but I just subscribed to this ML for this
conversation.

I'm currently working on the libzypp/zypper http/https media backend and
have suspected something like that myself.

Currently we do not download multiple files at once but download one
file in multiple chunks.

I have a new downloader implemented that would support multiple downloads
in parallel, but I suffer from the problem that MirrorBrain redirects me
from HTTPS to HTTP when I disable the metalink downloads in favour of
downloading full files only; curl errors out in that case.

I know we could disable that error, but I'm not really sure that this is
what we want. With metalink files we get the metalink description over
HTTPS, which includes the checksums for the individual chunks, so we can
then use plain HTTP connections as well because we can verify that we
really got what we asked for.
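In practice that check is just hashing each downloaded piece and comparing
it against the checksum from the HTTPS-fetched metalink description. A
minimal Python sketch (the function name is illustrative, not libzypp API;
metalink v3 commonly lists SHA-1 per piece, v4 typically SHA-256):

```python
import hashlib

def verify_chunk(chunk_bytes: bytes, expected_sha1: str) -> bool:
    """Compare a downloaded chunk against the per-piece checksum
    taken from the (HTTPS-fetched) metalink description."""
    return hashlib.sha1(chunk_bytes).hexdigest() == expected_sha1

# Hypothetical chunk fetched over plain HTTP:
chunk = b"example payload"
expected = hashlib.sha1(chunk).hexdigest()
assert verify_chunk(chunk, expected)          # matches -> trusted
assert not verify_chunk(chunk + b"x", expected)  # mismatch -> re-fetch
```

As long as the checksum list itself arrives over HTTPS, a failed comparison
only means re-requesting that one piece, which is why plain HTTP mirrors are
acceptable for the bulk transfer.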

It would be interesting to know whether we can configure MirrorBrain to
redirect to HTTPS whenever that's what the incoming connection is using.

I'm not sure whether we could drop metalinks completely; supporting them
makes the code much more complex, and if they actually do more harm than
good we should think about something else.

As to the suggestion to use dynamic or bigger chunks: the metalink
description file we download at the beginning contains the list of chunks,
and they are fixed. We probably could try to download multiple consecutive
chunks in the same request, though.
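Downloading several consecutive fixed-size chunks in one request would just
mean widening the Range header and splitting the response body back into
pieces for per-piece verification. A rough sketch (function names, the
262144-byte piece size, and the SHA-1 piece hashes are assumptions for
illustration, not zypp code):

```python
import hashlib

CHUNK = 256 * 1024  # fixed piece size from the metalink description

def merged_range_header(first_chunk: int, count: int,
                        chunk_size: int = CHUNK) -> str:
    """Build one HTTP Range header covering `count` consecutive
    fixed-size chunks starting at chunk index `first_chunk`."""
    start = first_chunk * chunk_size
    end = start + count * chunk_size - 1  # Range end is inclusive
    return f"bytes={start}-{end}"

def split_and_verify(payload: bytes, piece_hashes,
                     chunk_size: int = CHUNK):
    """Split a merged response body back into pieces and check each
    against its metalink checksum; returns indices of bad pieces."""
    bad = []
    for i, expected in enumerate(piece_hashes):
        piece = payload[i * chunk_size:(i + 1) * chunk_size]
        if hashlib.sha1(piece).hexdigest() != expected:
            bad.append(i)
    return bad

# The first two 256 KiB pieces as a single request:
print(merged_range_header(0, 2))  # → bytes=0-524287
```

Only the pieces that fail verification would need to be re-requested, so
merging doesn't weaken the integrity check at all.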

Cheers,

Benjamin

On 3/2/20 3:04 PM, Michael Andres wrote:

-------- Forwarded Message --------
Subject: [mirror] Suboptimal (for mirrors) download pattern by opensuse
clients
Date: Fri, 21 Feb 2020 14:36:26 +0100 (CET)
From: Niklas Edmundsson <nikke@xxxxxxxxxx>
To: mirror@xxxxxxxxxxxx


Hi,

am I the only mirror admin that finds the current behavior of opensuse
clients suboptimal?

Requests by "ZYpp 17.11.4 (curl 7.60.0) openSUSE-Leap-15.1-x86_64" etc
seem to be done with a 256 KiB chunk size, always; as an example:

GET bytes=0-262143
/mirror/opensuse.org/tumbleweed/repo/oss/x86_64/libqt5-qtwebengine-5.14.1-1.5.x86_64.rpm
GET bytes=262144-524287
/mirror/opensuse.org/tumbleweed/repo/oss/x86_64/libqt5-qtwebengine-5.14.1-1.5.x86_64.rpm

That's a silly small size, since TCP won't be able to ramp window sizes
and get good speed before those 256k are done. Also, we get
int($filesize/256k) entries in our logs for each download.
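To put that log noise in numbers: a fixed 256 KiB chunk means roughly one
access-log line per 262144 bytes of file. A back-of-the-envelope sketch
(the 60 MiB package size is an assumed example, not the actual size of the
rpm above):

```python
import math

CHUNK = 256 * 1024  # 262144 bytes, as seen in the GET ranges above

def requests_per_file(filesize: int, chunk: int = CHUNK) -> int:
    """Number of range requests (= access-log lines) a client issues
    when it fetches `filesize` bytes in fixed `chunk`-sized pieces."""
    return math.ceil(filesize / chunk)

# An assumed ~60 MiB package: hundreds of GETs for a single file.
print(requests_per_file(60 * 1024 * 1024))  # → 240
```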

To make matters worse, the thing seems to do some kind of round robin
between sites, with this pattern being the most ineffective looking from
a mirror admin standpoint:

GET bytes=2097152-2359295
/mirror/opensuse.org/tumbleweed/repo/oss/x86_64/libqt5-qtwebengine-5.14.1-1.5.x86_64.rpm
GET bytes=2621440-2883583
/mirror/opensuse.org/tumbleweed/repo/oss/x86_64/libqt5-qtwebengine-5.14.1-1.5.x86_64.rpm

Since the OS normally does read-ahead on file system reads, it will
read-ahead after byte 2359295 in preparation for the next read(). In
this case though, that's in vain as the request never comes but the next
data read is instead byte 2621440 and forward... OS read-ahead is most
commonly in the 64 kB-1 MB range, so it's not unlikely that the entire
256 KiB gap in between is read from disk without being used...

Downloading files this way is just plain stupid, IMHO.

I don't know what problem this behavior is supposed to solve, but it's
definitely not beneficial for us as a mirror, and I think it's hurting
your end users as well.

If you want more bandwidth from us, request larger chunks (or whole
files). The TCP window will grow and you'll get the performance (within
the limits of 10 gigabit networking for one download).

If you want to spread the load between mirrors, use larger chunks, and
specifically avoid small chunks and striped access.

In any case, merge requests! If you're going to request a number of
consecutive chunks, do it in one request, preferably as one range, to
make the most of the TCP connection you've set up.
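Merging on the client side could be as simple as collapsing directly
adjacent byte ranges before issuing requests. A small sketch of that idea
(my own illustration, not anything the client currently does):

```python
def coalesce_ranges(ranges):
    """Merge byte ranges that are directly adjacent (end + 1 == next
    start) so consecutive chunks go out as one request, not many."""
    merged = []
    for start, end in sorted(ranges):
        if merged and merged[-1][1] + 1 == start:
            merged[-1][1] = end          # extend the previous run
        else:
            merged.append([start, end])  # start a new run
    return [f"bytes={s}-{e}" for s, e in merged]

# The two consecutive 256 KiB chunks from the log excerpt above:
print(coalesce_ranges([(0, 262143), (262144, 524287)]))
# → ['bytes=0-524287']
```

Non-adjacent ranges (like the striped pattern quoted earlier) would stay
separate, so this only helps exactly where it should.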

My minimum suggestion would be to bump the chunk size to multiple
megabytes at the very least, possibly varying it depending on download
performance, and aiming for each GET to take at least a couple of seconds
so TCP can ramp up speed (and to reduce the noise in our logs). In
extreme cases we're seeing multiple tens of GETs per second for some
downloads; I'm guessing the rate throttles due to the RTT latency (ping
time) and not some real bandwidth limit...
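The "aim for each GET taking a couple of seconds" idea amounts to a simple
feedback loop: measure the throughput of the last request and size the next
chunk from it. A sketch of that loop (the constants, bounds, and function
name are my own assumptions, nothing any client implements):

```python
MIN_CHUNK = 1 * 1024 * 1024    # assumed floor: 1 MiB
MAX_CHUNK = 64 * 1024 * 1024   # assumed ceiling: 64 MiB
TARGET_SECONDS = 2.0           # aim: each GET lasts at least ~2 s

def next_chunk_size(last_bytes: int, last_seconds: float) -> int:
    """Pick the next chunk size so that a GET at the observed
    throughput takes about TARGET_SECONDS, clamped to sane bounds."""
    throughput = last_bytes / max(last_seconds, 1e-6)  # bytes/s
    proposed = int(throughput * TARGET_SECONDS)
    return max(MIN_CHUNK, min(MAX_CHUNK, proposed))

# Observed 10 MiB in 1 s -> ask for ~20 MiB next time:
print(next_chunk_size(10 * 1024 * 1024, 1.0))  # → 20971520
```

Slow links fall back to the floor and fast links get big, log-friendly
chunks that give TCP time to open its window.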


/Nikke - admin of ftp.acc.umu.se

--
Benjamin Zeller <bzeller@xxxxxxx>
Systems Programmer

SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nuremberg, Germany
Tel: +49-911-74053-0; Fax: +49-911-7417755; https://www.suse.com/

(HRB 36809, AG Nürnberg)
Managing Director: Felix Imendörffer


--
To unsubscribe, e-mail: mirror+unsubscribe@xxxxxxxxxxxx
To contact the owner, email: mirror+owner@xxxxxxxxxxxx
