Hi Katarina, I have quite a bit of input on this, which I hope is useful to you.

On Thu, Jan 22, 2009 at 12:21:56PM +0100, Katarina Machalkova wrote:
> Hi, zypp hackers :)
> When it comes to the handling of $http_proxy & friends, libzypp uses a mix of two or even three approaches:
Note that there is a fourth scenario now, when aria2 is used as downloader. It honours http_proxy, but it can also accept command line arguments which may or may not be useful for libzypp.
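For illustration, the same environment value could drive the downloader's command line as well. A sketch, assuming aria2c's --http-proxy flag; the proxy host is a made-up placeholder:

```shell
# Build an aria2c invocation from the same $http_proxy the environment
# already provides (proxy.example.com:3128 is a placeholder).
http_proxy="http://proxy.example.com:3128"
cmd="aria2c --http-proxy=$http_proxy"
echo "$cmd"
```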
> * it honors $http_proxy from the environment, probably because libcurl does so and libzypp explicitly does not unset this variable
> * in addition to that, it reads /root/.curlrc and gets the proxy URL (optionally also user/password) from there
> * last but not least, the proxy can be provided as part of the repository URL
Part of the repository URL - how does that work? I've never seen that.
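For reference, the first two mechanisms in the quoted list look roughly like this (a sketch; host and credentials are placeholders):

```shell
# 1) Environment variable, honoured by libcurl (and thus libzypp):
export http_proxy="http://user:pass@proxy.example.com:3128"

# 2) /root/.curlrc, the global config also read by the curl CLI:
#      proxy = "proxy.example.com:3128"
#      proxy-user = "user:pass"

# curl accepts credentials inside the URL, so the host:port part can be
# recovered by stripping the scheme and the user:pass@ prefix:
hostport="${http_proxy#http://}"
hostport="${hostport#*@}"
echo "$hostport"   # -> proxy.example.com:3128
```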
> I'm wondering what the incentive is behind combining all three of these approaches, instead of e.g. reading only $http_proxy (curl can already accept a URL in the form http://user:pass@some.proxy.url, so we don't necessarily need .curlrc for this data - or is .curlrc used because we can't effectively setenv during installation?).
> Or alternatively, libzypp could read only .curlrc, as it contains all the necessary information, and ignore $http_proxy completely. Judging from recent user complaints on Factory, that would be much more comfortable for those using GUI package managers, as they wouldn't have to re-login in order to apply their proxy settings.
It is not obvious to a system admin that /root/.curlrc is used by libzypp. I used it for other purposes in the past; it is a global configuration file that is also read by the curl command-line tool. Nor is it obvious that the file needs to be created or modified - only people who know curl/libcurl in some depth will be aware of it.

I suggest one of these paths:

1) Simply honour the environment variables. Possibly allow an optional way to override the system-wide configuration in zypp's own configuration, if necessary - possibly per-server. Thus the insufficiencies that Linux users have to live with would stay the same.

2) Implement a global proxy setting which is location-aware, e.g. by creating /etc/http_proxy and changing it on the fly when the network location changes. Something along the lines of [1] or [2] would do the job. It would make the changed settings effective in all newly opened terminal windows / new login shells, which might be good enough.

3) Changing all important small apps to honour /etc/http_proxy, in addition to $http_proxy, could be a next step. Obviously this involves talking to people and convincing them with patches, but it would be the best way to solve this long-standing (decade-long!) problem.

4) Running a local proxy could do the job transparently. With an environment setting of http_proxy=http://localhost:.../, all applications would go through this local intermediary. A system-wide proxy setting (or a change of setting) could then be achieved by reconfiguring that one proxy. It doesn't need to do any caching, just talk to the respective upstream proxy and pass things on. Squid is general-purpose proxy software that can be used for this. Alternatively, a small proxy could be written that accomplishes just this task and be used instead. Obviously, caching could be useful as well; "web accelerators" are a common feature that people are willing to pay real money for. (I used squid for this purpose on a roaming machine for a long time.)
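The pass-through setup in 4) can be sketched in a few lines of squid.conf. The upstream host and both ports here are hypothetical; the directives are squid's standard ones:

```
# Listen locally; applications set http_proxy=http://localhost:3128/
http_port 127.0.0.1:3128
# Forward everything to the current upstream proxy, never go direct:
cache_peer upstream.example.com parent 8080 0 no-query default
never_direct allow all
# No caching - just pass requests through:
cache deny all
```

Reconfiguring the upstream then means rewriting only the cache_peer line and reloading squid, while every application keeps the same localhost setting.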
Reconfiguration requires modifying squid.conf, which is ugly, but that could of course be changed. As an additional feature, squid could be enhanced to parse WPAD proxy autoconfiguration [3] (which needs a rudimentary JavaScript interpreter). That way, all applications could benefit from this (or other) means of proxy autodiscovery.

Primarily, the advantage of an intermediary proxy is that no changes to existing applications would be required, and roaming can be supported cleanly. Misconfigured roaming machines (lacking environmental awareness) are a primary cause of network noise and failed flows. A study [4] found that one third of the traffic in and out of laptops consists of failed flows, partly due to browsers or other applications trying to contact servers at wrong IP addresses or wrong proxy servers.

IMO, 3) and 4) are especially worth pursuing.

Peter

[1] http://www.cs.usyd.edu.au/%7Emassad/project-proxy-config.html
[2] http://tomayko.com/writings/os-x-network-location-support-from-the-command-l...
[3] http://en.wikipedia.org/wiki/Proxy_auto-config
[4] "How Healthy are Today's Enterprise Networks?", Guha et al., 2008 ACM SIGCOMM Internet Measurement Conference

-- 
Contact: admin@opensuse.org (a.k.a. ftpadmin@suse.com)
#opensuse-mirrors on freenode.net
Info: http://en.opensuse.org/Mirror_Infrastructure
SUSE LINUX Products GmbH Research & Development