Carlos E. R. wrote:
Not good enough. Take the kernel, say: it is downloaded simultaneously from 3 mirrors. A proxy cache would end up storing 3 identical copies of the kernel, wasting both space and download bandwidth, because it does not recognize that they are the same file.
And to be of any use, if I download the same kernel again in two weeks, I want it served from the cache. Instead, MirrorBrain could point me to 3 other mirrors and I would download it again 3 more times, saving no resources at all. As it stands, a plain download proxy would be worse than nothing.
We would need a special kind of proxy cache that receives the requests from zypper directly, performs the download the same way zypper would, saves the files locally in a structure that mimics the upstream directories, and then serves the requests of the zypper or YaST instances on the local LAN.

Agreed, a specialized solution would probably have the best possible hit rate, but it would need a lot of functionality that already exists elsewhere, e.g. some cleanup or age-out mechanism to delete packages that are no longer needed.
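The core of such a specialized cache would be the mapping from any mirror URL to one local path that mimics the upstream layout, so the same RPM fetched from different mirrors lands in the same place. A minimal sketch of that mapping (the cache location and the assumption that mirrors keep the upstream directory layout are hypothetical):

```python
from urllib.parse import urlparse
import posixpath

CACHE_ROOT = '/var/cache/zypp-proxy'  # hypothetical cache location

def cache_path(url: str) -> str:
    """Collapse any mirror URL onto one local path mirroring upstream.

    Assumption: mirrors preserve the upstream layout, possibly below a
    local prefix such as /pub/opensuse/.
    """
    path = urlparse(url).path
    for marker in ('distribution/', 'update/', 'repositories/'):
        idx = path.find(marker)
        if idx != -1:
            # Keep everything from the well-known top-level directory on.
            return posixpath.join(CACHE_ROOT, path[idx:])
    # Not recognizable repo content: fall back to the plain path.
    return posixpath.join(CACHE_ROOT, path.lstrip('/'))
```

With this, `mirror1.example.org/distribution/...` and `mirror2.example.net/pub/opensuse/distribution/...` resolve to the same cached file, which is exactly what a generic proxy cache fails to do.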
Per managed to convince Squid to do the job, but he said it was not easy. He wrote about it somewhere.
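For reference, one way Squid can be taught to treat mirror copies as a single object is a Store-ID helper (wired up with the `store_id_program` directive): the helper maps every mirror URL onto one canonical cache key, so the same RPM fetched from 3 mirrors occupies one cache entry. This is a hypothetical sketch, not necessarily the setup Per describes on his wiki; the key prefix and regex are assumptions:

```python
#!/usr/bin/env python3
"""Hypothetical Squid Store-ID helper: normalize openSUSE mirror URLs
onto one canonical cache key so identical files cache only once."""
import re
import sys

# Assumption: mirrors keep the upstream layout, possibly below a local
# prefix such as /pub/opensuse/.
REPO_RE = re.compile(
    r'^https?://[^/]+/(?:.*?/)?((?:distribution|update|repositories)/.+)$')

def store_id(url: str) -> str:
    """Return a canonical cache key, or '' if the URL is not repo content."""
    m = REPO_RE.match(url)
    return 'http://opensuse.repo.key/' + m.group(1) if m else ''

def main() -> None:
    # Squid sends one request per line ("[channel-id] URL ...") and the
    # helper answers "OK store-id=KEY" or "ERR" on stdout.
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        channel = parts[0] + ' ' if parts[0].isdigit() else ''
        url = parts[1] if channel else parts[0]
        key = store_id(url)
        sys.stdout.write(channel + (f'OK store-id={key}' if key else 'ERR') + '\n')
        sys.stdout.flush()

if __name__ == '__main__':
    main()
```

This solves the duplicate-storage half of the problem; making Squid actually keep and revalidate the RPMs (refresh patterns, object size limits) is the part that reportedly took the real effort.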
Yes, I still think my solution is a little more complex than it ought to be, but for anyone doing regular installations on a slow(ish) link, I think it's worth the hassle. Back then I saw a 60% improvement, and we were still on 100Mbit ethernet.

-- 
Per Jessen, Zürich (17.4°C)
https://wiki.jessen.ch/index/How_to_cache_openSUSE_repositories_with_Squid