On Sun, Jul 25, 2010 at 07:32:54PM +0200, Keld Simonsen wrote:
My machines there are on the Danish research network, close to the Danish Internet Exchange (DIX) - some 100 meters from it. We have 10 Gbit/s on the university campus, but not (yet) to our machines. I would think there is a good connection to the German research network - probably a multi-gigabit link. It should be good for several hundred Mbit/s of traffic, not just 10 Mbit/s.
Should, yes. In reality it is not.
No, there are no real limits. In my tests it is possible to get 1 GBit/sec for many machines at the same time.
We are talking about 2 things here:
- the total bandwidth consumed at a server machine
- the obtainable bandwidth for an extra machine (a user).
The server is able to provide at least 8 GBit/sec; we tested that. The university network is currently limited to about 400,000 packets/sec. Here we still need to experiment. Right now, and I guess around the clock, I can get several GBit/sec from the machine when downloading a file. I don't see when/why this should not be the case (apart from the 400kp/s issue).
2010-07-25 20:05:12 (111 MB/s) - `/dev/null' saved [4697126912/4697126912]
If you find a machine with 10 GBit/sec or more, a decent connection to our university and the 400kp/s issue is not there, I guess you can download with 5 GBit/sec and more. I can't test this, though.
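The 400kp/s figure is consistent with that guess: assuming full-size 1500-byte Ethernet frames (my assumption; the real packet-size mix is smaller, so this is an upper bound), 400,000 packets/sec caps out near 5 GBit/sec. A quick back-of-the-envelope check:

```python
# Rough bandwidth ceiling implied by a packets-per-second limit,
# assuming full 1500-byte Ethernet frames (an upper bound).
PPS_LIMIT = 400_000      # packets/sec, as quoted above
MTU_BYTES = 1500         # assumed bytes per packet

bits_per_sec = PPS_LIMIT * MTU_BYTES * 8
print(f"{bits_per_sec / 1e9:.1f} GBit/sec")  # 4.8 GBit/sec
```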
Do you get 1 Gbit/s from a testing client machine towards your server? Do you get it at this time (Sunday afternoon)?
Yes. I get 1 GBit/sec each for several machines, if I want to. As said, we reached 8 GBit/sec already.
Hmm, I think you should not benchmark from your ftp machine itself; its file systems could be very busy. I take it your own tests were conducted from client machines?
I downloaded to /dev/null, which is fast. Besides, we have a very fast file system and a lot of RAM to cache the download. This is clearly not the issue. The other tests were taken using other machines in the university, yes. A local download currently gives 8 GBit/sec:
2010-07-25 20:08:15 (1.01 GB/s) - `/dev/null' saved [4697126912/4697126912]
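As a sanity check (my own arithmetic, not from the thread), wget's byte rates convert to the bit rates discussed here: 111 MB/s is just under 1 GBit/sec and 1.01 GB/s is about 8 GBit/sec, matching the figures above. I assume decimal units; if wget's "M" is binary, the results land a few percent higher.

```python
# Convert a byte rate, as reported by wget's summary line,
# to a bit rate in GBit/sec (decimal units assumed).
def to_gbit(bytes_per_sec: float) -> float:
    return bytes_per_sec * 8 / 1e9

print(to_gbit(111e6))    # remote download: ~0.89 GBit/sec
print(to_gbit(1.01e9))   # local download:  ~8.08 GBit/sec
```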
Hmm, how do I control the peer list to be just from your ftp site?
You don't. Instead you just look at the number for our server and ignore the others (and the total).
So you are just part of the general opensuse or ubuntu seed? That should not be too difficult for me to do also.
Yes, that is how BitTorrent works. One huge swarm for a single file.
I don't know how well equipped things are here; I am not in charge of routing. Anyway, I would expect it to be quite standard: they run a 10 Gbit/s backbone on the university campus, and they host the national internet exchange, where they also have a number of international lines. Are you talking about international connection bandwidth?
Yes, since we are in Germany and you are in Denmark.
I do get fine speeds from other servers in Denmark, say 200 - 300 Mbit/s.
Denmark is not Germany. Try other servers in Germany. If you can find _any_ with a decent performance, we can start thinking about problems at our end.
So you are saying that the bottlenecks are not the ftp servers, but rather the national infrastructure, at least in some cases. This could give me some insight into which mirrors to choose for rsyncing, e.g. I should prefer Danish or Nordic servers over other European servers. And BT could also give an advantage.
This also has an impact on normal users: many have 20, 50 or 100 Mbit/s download connectivity, and getting only 10 Mbit/s because of national infrastructure issues could make them want to prioritize particular mirrors.
I would not say that the connection between Denmark and Germany is 10 MBit/sec. But when you want to download something, you need to think of those limitations.
So what are the main types of traffic for you? 30 TB a day is almost 3 Gbit/s on average...
It is. In this case it was the automatic update for Firefox 3.6.7, so roughly 20% of all (European?) Firefox users downloaded from our server. This is a lot, and on average we had 3 GBit/sec.
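The "30 TB a day is almost 3 gigabit" arithmetic checks out (my own back-of-the-envelope, assuming decimal terabytes):

```python
# Sustained rate needed to move 30 TB in one day.
bits = 30 * 1e12 * 8           # 30 decimal terabytes, in bits
seconds_per_day = 24 * 3600
gbit_per_sec = bits / seconds_per_day / 1e9
print(f"{gbit_per_sec:.2f} GBit/sec")  # 2.78 GBit/sec
```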