On 15/06/2019 18.08, dieter wrote:
On Sat, 15 Jun 2019 12:46:14 +0200 Carlos E. R. wrote:
On 15/06/2019 10.52, dieter wrote:
On Sat, 15 Jun 2019 09:06:18 +0200 Hans-Peter Jansen wrote:
Hi,
Given the following scenario: a local LAN with a couple of Tumbleweed installations, and optionally a server.
I) a modification of the zypper download lib to honor an environment variable, e.g. ZYPPERCACHE, relaying the downloads to that system (server) if it is set;
II) a zypper caching server that uses the zypper download lib, but implicitly keeps the downloads for later reuse.

I fully support the idea of a local caching mechanism.
But I am not convinced it is a good solution to pack this functionality into zypper. In principle "some" local caching (http) proxy on one of the systems would be sufficient for this purpose, and zypper on the other systems would access it simply through its proxy settings.
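For illustration, pointing zypper at such a proxy needs nothing beyond the usual openSUSE proxy settings; roughly something like this on each client (hostname and port are made up):

  # /etc/sysconfig/proxy -- example values only
  PROXY_ENABLED="yes"
  HTTP_PROXY="http://cachebox.lan:3128"
  HTTPS_PROXY="http://cachebox.lan:3128"
  NO_PROXY="localhost, 127.0.0.1"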
The problem is that the actual download URL changes, depending on which mirror MirrorBrain decides is best each time. This makes using a proxy server more difficult.

Yes, you are right, the hit rate is not 100%. But when the update of all systems happens within a limited time frame, most of the packages are still fetched from the same download URL, so they are downloaded only once and then taken from the proxy cache.
Not good enough. Take the kernel: it may be downloaded simultaneously from 3 different mirrors. A proxy cache would end up storing 3 identical copies of it, wasting space and download pipe, because it does not recognize that they are the same thing. And to be of any use, when I download the same kernel again in two weeks I want it served from the cache; instead MirrorBrain could point me to 3 other mirrors and I would download it 3 more times, not saving any resources. As it is, a download proxy would be worse than nothing.
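To make that concrete (the package name and mirror host below are invented, only the redirect matters): asking download.opensuse.org for a package does not return the file, it returns a redirect to whichever mirror MirrorBrain picks at that moment, and a plain proxy keys its cache on that mirror URL:

  $ curl -sI https://download.opensuse.org/tumbleweed/repo/oss/x86_64/example-package.rpm
  HTTP/1.1 302 Found
  ...
  Location: http://mirror.example.net/opensuse/tumbleweed/repo/oss/x86_64/example-package.rpm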
We would need some kind of special proxy cache that gets the requests from zypper directly, then does the download in the same manner that zypper would, saves it locally in a structure that mimics the upstream directories, and serves the requests to the local LAN zyppers or yasts.

Agreed, a specialized solution would probably have the best possible hit rate, but it would need a lot of functionality which already exists elsewhere, e.g. some cleanup or age-out mechanism to delete packages which are no longer needed.
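Just to illustrate the idea, not as a finished tool: a minimal sketch of such a cache could look roughly like the Python below. Hostname, port and cache directory are made up, and metadata handling, verification and the cleanup/age-out mechanism mentioned above are left out.

  #!/usr/bin/env python3
  # Sketch of a LAN package cache: clients point a repo (or proxy setting)
  # at http://cachebox.lan:8080/<repo-path>.  On a miss it fetches the file
  # from download.opensuse.org (following the MirrorBrain redirect), stores
  # it under CACHE_DIR in the same directory layout, and serves it; on a hit
  # it serves the local copy.  The cache key is the canonical repo path, not
  # the mirror URL, so the same RPM is stored only once.
  import os
  import shutil
  import urllib.request
  from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

  UPSTREAM = "https://download.opensuse.org"   # canonical entry point
  CACHE_DIR = "/var/cache/lan-repo"            # illustrative location

  class CacheHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          rel = self.path.lstrip("/")
          local = os.path.join(CACHE_DIR, rel)
          if not os.path.isfile(local):
              os.makedirs(os.path.dirname(local), exist_ok=True)
              try:
                  # urllib follows the 302 to the mirror transparently.
                  with urllib.request.urlopen(UPSTREAM + self.path) as resp, \
                          open(local + ".part", "wb") as out:
                      shutil.copyfileobj(resp, out)
                  os.replace(local + ".part", local)
              except Exception as exc:
                  self.send_error(502, "upstream fetch failed: %s" % exc)
                  return
          self.send_response(200)
          self.send_header("Content-Length", str(os.path.getsize(local)))
          self.end_headers()
          with open(local, "rb") as f:
              shutil.copyfileobj(f, self.wfile)

  if __name__ == "__main__":
      ThreadingHTTPServer(("", 8080), CacheHandler).serve_forever()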
Per managed to convince Squid to do the job, but he said it was not easy. He wrote about it somewhere.
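For reference, and not necessarily what Per did: newer Squid versions (3.4 and later) have a Store-ID feature meant for exactly this kind of mirror duplication, where a small helper maps every mirror URL of the same file onto one canonical cache key. A rough, untested sketch with made-up paths and helper name:

  # squid.conf excerpt (illustrative)
  store_id_program /usr/local/bin/opensuse_store_id.py
  store_id_children 5 startup=1

  #!/usr/bin/env python3
  # Illustrative Store-ID helper: strip the mirror-specific prefix so every
  # mirror's copy of .../tumbleweed/repo/... gets the same cache key.  The
  # line protocol (URL in, "OK store-id=..." or "ERR" out) is assumed from
  # the Squid Store-ID helper documentation.
  import sys
  from urllib.parse import urlsplit

  for line in sys.stdin:
      path = urlsplit(line.split()[0]).path
      idx = path.find("/tumbleweed/")
      if idx >= 0:
          print("OK store-id=http://repo.squid.internal" + path[idx:], flush=True)
      else:
          print("ERR", flush=True)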
It would be good for anyone maintaining several machines.

Definitely.

--
Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)