Bjoern Voigt wrote:
What do you think, is a Squid caching proxy server still useful these days?
--- Depends on your usage & use case. For me, going back about 2000 log entries, I see (these are squid's mem and disk hits, not the browser's):

            Hits/Total         Bytes/Total
    mem:    5% (88/1563)       4% (1.1M/27M)
    dsk:    9% (143/1563)      0% (156K/27M)
    tot:   14% (231/1563)      4% (1.3M/27M)

Going back the whole day:

            Hits/Total         Bytes/Total
    mem:    2% (433/14523)     4% (6.8M/146M)
    dsk:    7% (1148/14523)    6% (9.4M/146M)
    tot:   10% (1581/14523)   11% (16M/146M)

When I do more web browsing, especially news sites, I get more recent hits from memory. Byte percentages take a real dive if there is streaming or downloading going on.
Personally, I last used the Squid caching proxy for SOHO networks on SuSE/openSUSE some years ago. Internet bandwidth is still limited and never enough, but I am unsure whether it is worth setting up Squid caching proxy servers today.
--- I usually set up my browsers to use no local disk cache -- relying on squid instead. If you only have 1 browser on 1 OS, the benefit may be lower.
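For reference, a minimal sketch of the squid.conf cache-sizing directives involved; the sizes and the cache_dir path below are just examples, not tuned recommendations:

    # squid.conf (sketch -- sizes and paths are examples only)
    cache_mem 256 MB                              # RAM cache for hot objects
    maximum_object_size_in_memory 512 KB          # bigger objects go to disk only
    cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB disk cache, default ufs store
    maximum_object_size 200 MB                    # don't bother caching huge downloads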
I see a lot of factors that make the acceleration effect of a caching proxy less effective:
* better Internet bandwidth for most users than in earlier times
Not really different here.
* busy caching proxy servers may slow down traffic
Well, if you are adding a bottleneck, sure, but then what's the point? You need to scale the equipment and resources for the job.
* the share of dynamic content is high today
True, but depending on the content, it may not really be dynamic.
* much content can't be cached, because it may be somehow private (HTTPS, authenticated, cookies, ...)
* even static content is often marked as dynamic and so can't be cached (HTTPS, cookies, ...)
For the above --- these can be controlled/managed if you have a squid proxy -- especially one that does SSL interception -- which lets you store a lot more data, since there is a lot of stuff that is cacheable (not dynamic) but is now HTTPS-protected. If you open the SSL layer, much becomes cacheable again. You just have to be sure to put in exceptions for *real* HTTPS sites, like finance and such (as opposed to the "please protect me from seeing my own traffic, but allow google to track it all and give it to governments that want it" sort).
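For what it's worth, a rough sketch of what that looks like in squid.conf (Squid 4.x SslBump directives; the CA path, helper path, and the exception domains are placeholders to replace with your own):

    # squid.conf (sketch -- assumes you have generated your own bump CA)
    http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/bumpCA.pem
    sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

    # *real* HTTPS sites (banking etc.) get spliced through untouched
    acl nobump ssl::server_name .mybank.example .broker.example
    acl step1 at_step SslBump1

    ssl_bump peek step1        # read the SNI before deciding
    ssl_bump splice nobump     # pass the exceptions through unmodified
    ssl_bump bump all          # decrypt (and cache) the rest

Keep in mind the client machines have to trust the bump CA, or every HTTPS site will throw certificate warnings.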
* CDNs cause the problem that each user gets copies of the same content from different hosts
--- ??? Most should have similar URLs.
* many requests (AJAX, REST ...) are purely dynamic and cannot be cached
Many requests of that nature still load images and scripts.
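Those images and scripts are exactly what refresh_pattern rules can target; a hedged example (times are in minutes, and the values are purely illustrative):

    # squid.conf (sketch) -- keep common static assets fresh for at least a
    # day and at most a week when the origin sends no explicit expiry info
    refresh_pattern -i \.(jpg|jpeg|png|gif|webp|js|css|woff2?)$ 1440 50% 10080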
* users do not expect proxy servers anymore and some browser apps (on mobile devices) cannot deal with proxies
Um... I usually think of a squid proxy serving computers on a LAN, not mobile devices.
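That said, clients that can't be configured for a proxy can still be served by running Squid in interception mode, with the gateway NAT-redirecting port 80 to it; a rough sketch of the squid side (the redirect rule itself lives on the router, not here):

    # squid.conf (sketch)
    http_port 3128              # normal, explicitly configured clients
    http_port 3129 intercept    # traffic NAT-redirected from the gateway

Intercepting HTTPS on top of that would additionally need an https_port plus the SSL-bump setup above.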
* client-side caches in browsers may have become better
---- Nope. They still use redundant disk space (if you let them), and have a limited memory cache that is flushed on restart (which you have to do periodically due to crashes, hangs or resource leaks). The only thing that allows multiple browsers on multiple logins on the same computer to share content is squid -- let alone sharing content with other browsers and users on other computers.
I still see Squid proxies or other proxies (caching or not) as useful for special use cases:
* other security-related use cases: proxy-only Internet access, ...
--- One sets up "proxy-only" access to force the use of a proxy -- not the other way around.
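As a sketch of the squid side of "proxy-only" (the subnet is an example; actually enforcing it also needs a firewall rule on the gateway that blocks direct outbound 80/443 from the LAN):

    # squid.conf (sketch)
    acl localnet src 192.168.1.0/24    # example LAN range
    http_access allow localnet
    http_access deny all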