Hi,

On Tue, 28 Feb 2006, Carlos E. R. wrote:
The Tuesday 2006-02-28 at 08:35 +0100, Eberhard Moenkeberg wrote:
Does downloading a DVD image have more impact on an ftp server than downloading (or installing) from the classical ftp tree, i.e., each separate rpm? I mean, of course, with many simultaneous clients, as at gwdg.
Downloading a big image causes far fewer filesystem operations at the server than fetching a bunch of single files. At ftp.gwdg.de, the filesystem I/O alone is the bottleneck.
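(A small back-of-the-envelope sketch of that point. The function and the per-file cost of three metadata operations are illustrative assumptions, not measurements from ftp.gwdg.de; the file counts are likewise made up for the example.)

```python
def metadata_ops(num_files, ops_per_file=3):
    """Rough count of filesystem metadata operations (path lookup,
    open, close) a server performs to serve num_files files.
    ops_per_file=3 is an illustrative assumption, not a measurement."""
    return num_files * ops_per_file

# One DVD image vs. a hypothetical tree of 2500 individual rpms:
print(metadata_ops(1))     # one big image: 3 metadata operations
print(metadata_ops(2500))  # per-rpm install: 7500 metadata operations
```

The data volume is the same either way; it is the per-file metadata churn (and the resulting disk seeks) that multiplies with the number of files.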
I see.
I was wrong then; I thought that many people downloading the iso images would be a heavier load on you than the same users installing by ftp.
But usually, at the home user's side, the network is the bottleneck, so it makes sense to use an external server as the installation source. This way, much less volume has to be transferred.
Yes, that part I knew ;-)
But if the servers are under high load, this "less volume" transfer may take longer than the "big volume" one, due to filesystem latencies.
Yes, that's easy to understand.
And I suppose that those programs that download big files by retrieving different chunks (different start points) at the same time from the same server make things worse, especially for the rest of the users. :-?
I have seen that working, but I have always refused to do it myself.
I'm not clear what the real consequences for the server are, and they may differ between 2.4 and 2.6 kernels and between 32- and 64-bit systems. At least I tolerate such behaviour, thinking that there is a good chance that the inode data are still in cache when the second chunk gets requested, resulting in less filesystem I/O for "finding" the chunk. But with a 32-bit arch and 2.4 kernels this is probably not much the case, because the inode cache is very limited in size (and more limited in SUSE kernels than in vanilla, to reduce the chance of hash collisions).

But anyway, it is nothing else than 10 users fetching the same file at the same time, just somewhat "asynchronously".

Cheers -e
--
Eberhard Moenkeberg (emoenke@gwdg.de, em@kki.org)
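(For illustration, here is a hypothetical sketch of how such a download accelerator splits one file into byte ranges, each of which becomes a separate parallel request to the same server. The function name `split_ranges` and the chunking scheme are my own assumptions, not code from any particular tool.)

```python
def split_ranges(file_size, num_chunks):
    """Divide a file of file_size bytes into num_chunks contiguous
    byte ranges (start, end), end inclusive -- the ranges a download
    accelerator would request in parallel, e.g. via HTTP Range headers.
    Earlier chunks absorb the remainder when the size doesn't divide evenly."""
    base, extra = divmod(file_size, num_chunks)
    ranges = []
    start = 0
    for i in range(num_chunks):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# Ten parallel requests for one (hypothetical) 4.5 GB DVD image:
for start, end in split_ranges(4_500_000_000, 10):
    print(f"bytes={start}-{end}")
```

From the server's side, as noted above, this looks much like ten users fetching the same file at once, just starting at staggered offsets.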