Carlos E. R. wrote:
I don't think you can use those large transfers over the internet. A local ethernet, I dunno.
***Usually***, one uses NFS/SMB on a local network. For the internet, one uses HTTP/FTP/SSH, etc. While it is possible to configure NFS/SMB to work over the internet, they were designed for *local network file sharing*. How many Internet sites can you point at that allow the general public to connect/download with NFS or SMB?
--
Cheers / Saludos,

John Andersen wrote:
On 6/5/2012 4:14 PM, Linda Walsh wrote:
A bit annoying at the ignorance shown by various engineers -- thinking that 4k writes are still optimal. When I asked for an increased buffer size on Tbird (and FFox), I was told that the network only transfers a max of 1500 bytes/packet, so anything more than that was a waste.
Once it gets off your network, those assumptions are probably true. There was a series of articles on Bufferbloat that brought this issue to light with regard to connections over the internet. http://gettys.wordpress.com/2010/12/03/introducing-the-criminal-mastermind-b...
---- I read those articles. While they made some sense, I'm not sure the problem is as bad as they portray. You can't use 9K packets over the internet, and you may even be constrained to under 1500 bytes if you are running IPv6 or a VPN, but it is still the case that a LARGE buffer of 256K-1M, allowing a TCP window of *similar* size, is absolutely necessary for most network applications to get any speed.

Look at your ping times to, say, google or youtube. I get relatively fast times, I think -- I've seen a lot worse: >50ms on ISDN, 30-40ms on DSL, low-to-mid 20s on my cable. At 20ms/packet, if you only send 1 packet out at a time (1500 bytes max), you get 50 packets in 1 second: 50 * 1500 = 75,000 bytes (/1024) = 73.2KB/s. Most people would say that 73KB/s sucks for a download speed. 7-8 years ago, commercial speeds in Europe to the home were already up to 10-15Mb/s, or 1.25-1.8MB/s (roughly 17-25 times the 1-packet-per-ping rate). Right now I am in a slowish area and also can't pay premium, but I get up to 22.5Mb/s down, which, NOT counting overhead, would get me about 2.5-2.8MB/s, or about 35x that relatively low-ping 1-packet rate.

Now, I know people out there who get 500-1000Mb/s, or 60-120MB/s; call it 73.2MB/s for round numbers. That's 1000 times faster than what you would get if you only send 1 packet at a time and wait for its acknowledgment. To get those speeds, you have to be willing to send out 1000 packets, or 1,500,000 bytes -- let's round to 1.5MB of buffer, allowing a 1.5MB sliding window. So even over the internet, if you have a FAST connection, 2MB buffers wouldn't be unreasonable. At my piddly rate, a .5M buffer would be enough.

This is the main reason why ISPs and others HAVE BUFFERS -- because of lame app writers, as mentioned earlier. If the ISPs waited for the other end to acknowledge receipt of each 4K write, I'd get about 1/30th of my possible performance. (This is a major reason why NFS and SMB are NOT used for wide area networks.)
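The arithmetic above is the bandwidth-delay product: one packet per round trip caps you at MTU/RTT bytes per second, and filling a link requires a window of (link speed x RTT) bytes. A small sketch with the same figures as the post (the 600Mb/s link speed is an assumption chosen to match the 73.2MB/s round number):

```python
MTU = 1500  # max bytes per Ethernet packet, as in the post

def stop_and_wait_rate(rtt_s, packet_bytes=MTU):
    """Throughput if you send one packet, then wait for its ACK."""
    return packet_bytes / rtt_s          # bytes/second

def bdp(link_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the link."""
    return (link_bps / 8) * rtt_s        # bits/s -> bytes/s, times RTT

rtt = 0.020  # the 20ms cable ping time from the post

# One packet per round trip: ~73.2 KB/s no matter how fast the link is.
print(stop_and_wait_rate(rtt) / 1024)

# A 22.5Mb/s link only needs a ~55KB window to stay full...
print(bdp(22.5e6, rtt) / 1024)

# ...but an assumed 600Mb/s (73.2MB/s-ish) link needs ~1.5MB in flight,
# matching the 1000-packet / 1.5MB sliding-window figure above.
print(bdp(600e6, rtt) / 1e6)
```

The same formula explains the 4K-write case: 4KB per 20ms round trip is only ~200KB/s, regardless of how fast the pipe is.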
Lame app writers are more of a problem than buffer bloat. Check out your network activity with wireshark and see how many apps use baby-writes. It was only a few years ago that the linux 'cp' program still limited itself to 4k writes; I think it has improved since.

Now, if you are writing to a LOCAL hard disk: just 1 new hard disk can exceed 100MB/s, so fragmentation and small writes can REALLY kill performance. If you are running a RAID or an SSD, speeds of 400-1024MB/s are not uncommon -- you can't afford inter-packet latency at all!

Does that explain why buffers and read/write sizes NEED to be 1MB or more for most applications? NOT that most apps can USE that (I mean, how much space is consumed by an email sig?)... but for optimal speeds, larger buffers and I/O sizes are vital.