Hans-Peter Jansen wrote:
> On Sunday, 16 June 2019, 10:12:42 CEST, Per Jessen wrote:
>> Carlos E. R. wrote:
>>> Per managed to convince Squid to do the job, but he said it was not
>>> easy. He wrote about it somewhere.
>>
>> Yes, I still think my solution is a little more complex than it ought
>> to be, but for anyone doing regular installations on a slow(ish) link,
>> I think it's worth the hassle. Back then I saw a 60% improvement, and
>> we were still on 100Mbit ethernet.
>
> I need to analyse these processes again, but as far as I remember, the
> problem with this setup is that the more mirrors are active, the less
> effective it is. You end up with a lot of redundancy, and you will miss
> any connections via https (e.g. gwdg), which is a global trend.
I wonder if you may have misunderstood my solution (apologies for going
off-topic, maybe this is better placed elsewhere).

Redundancy - yes, when a complete file is fetched (to get it cached),
that is redundant. With a clean cache, that leads to up to twice the
amount of traffic. On install #2 you will recoup that, and any
subsequent install is for free.

Less effective? No, I don't think so. All mirrors are mapped to one,
never mind how many there might be. How do you see it becoming less
effective?

https - well, as long as our mirror infrastructure doesn't support
https, it doesn't matter much what the global trend might be :-)
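In case it helps, the mechanics boil down to a Squid StoreID helper
that rewrites every mirror URL to one internal name before the cache
lookup. A rough sketch - the hostnames, path prefixes and internal name
below are only examples, not my exact setup (see the wiki page for
that); the store_id_program interface itself is standard Squid:

#!/usr/bin/env python3
# Sketch of a Squid StoreID helper: map every known mirror of the same
# repository onto one internal name, so an object fetched via mirror A
# becomes a cache hit when later requested via mirror B.
#
# Hooked into squid.conf along these lines (paths and ACLs are examples):
#   store_id_program /usr/local/bin/storeid_mirrors.py
#   acl repos dstdomain .opensuse.org .gwdg.de
#   store_id_access allow repos
import sys
from urllib.parse import urlsplit

# Example mirror list with per-mirror path prefixes -- illustrative and
# incomplete; a real setup carries the full list of mirrors it uses.
MIRRORS = {
    'download.opensuse.org': '',
    'ftp.gwdg.de': '/pub/opensuse',
}

def store_id(url):
    """Return the normalized cache key for url, or None to leave it alone."""
    parts = urlsplit(url)
    host = (parts.hostname or '').lower()
    if host not in MIRRORS:
        return None
    path = parts.path
    prefix = MIRRORS[host]
    if prefix and path.startswith(prefix):
        path = path[len(prefix):]
    # One common internal name; only the path identifies the object now.
    return 'http://repo.internal.invalid' + path

def main():
    for line in sys.stdin:
        # Default (non-concurrent) helper protocol: "URL [extras...]" in,
        # "OK store-id=<key>" or "ERR" out, one reply per request line.
        fields = line.split()
        if not fields:
            continue
        key = store_id(fields[0])
        sys.stdout.write('OK store-id=%s\n' % key if key else 'ERR\n')
        sys.stdout.flush()

if __name__ == '__main__':
    main()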
> I've attempted to remove some redundancy with:
>
> https://github.com/frispete/squid_dedup
>
> where the idea is to relocate CDN URLs to some (internal) common name.
> Subsequent accesses will find the objects, no matter which CDN URL they
> use (provided all possible CDN URLs are configured correctly).
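If I read that right, it is the same normalization - with a helper like
the sketch above, the different spellings of one object all collapse
onto a single cache key, e.g. (hypothetical URLs):

# Different mirror spellings of the same object map to one key
# (using the illustrative store_id() from the sketch above):
for u in ('http://download.opensuse.org/tumbleweed/repo/oss/x86_64/foo.rpm',
          'http://ftp.gwdg.de/pub/opensuse/tumbleweed/repo/oss/x86_64/foo.rpm'):
    print(store_id(u))
# both print: http://repo.internal.invalid/tumbleweed/repo/oss/x86_64/foo.rpm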
So exactly what I do?

-- 
Per Jessen, Zürich (18.4°C)
https://wiki.jessen.ch/index/How_to_cache_openSUSE_repositories_with_Squid