On Wed, Oct 1, 2014, at 02:31 PM, John Andersen wrote:
> Perhaps your 15% failure rate is due to selecting the same repos for all your 200 machines scattered around the globe, conflicting with scheduled down time of the repositories or even down-time on each of your site's own networks.
The machines are configured to use http://download.opensuse.org, with ZYPP_ARIA2C=1, not (necessarily) a fixed mirror URL. zypper is supposed to query the redirector's links and find/use a 'best' (by some criterion) mirror. There's no downtime logged on any of my networks at any of the failure times; also, each site has fully redundant connectivity. If there were a network failure at the time of a zypper (d)up, it'd fail for ALL the repos in, e.g., a refresh, not just one or some.
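One quick way to tell a repo-specific failure from a network-wide one is to refresh each enabled repo individually rather than all at once. A rough sketch, assuming `zypper --xmlout lr -E` output; the `repo_aliases` helper and its simplified parse are mine, not part of zypper:

```shell
#!/bin/sh
# Sketch: refresh repos one at a time so a single bad repo doesn't mask the rest.
# Parsing is deliberately simplified for illustration; a real script should use
# a proper XML tool rather than sed.

# Extract repo aliases from zypper's XML repo listing (read from stdin).
repo_aliases() {
    sed -n 's/.*<repo [^>]*alias="\([^"]*\)".*/\1/p'
}

# Illustrative driver (needs zypper, so shown commented out):
# zypper --xmlout lr -E | repo_aliases | while read -r alias; do
#     zypper --non-interactive refresh "$alias" \
#         || echo "refresh FAILED for repo: $alias" >&2
# done
```

That way a single failing repo is logged and skipped instead of aborting the whole run.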
> Maybe you would be better off running your own cloned repositories on your own network, or preceding your update attempt with a simple wget to see if the network is up and running and the XML file exists.
Yes, there are alternatives. Including other distros. I'm interested in the proper, standard function of zypper on openSUSE. In general, when repos are 'up', it works fine. It does NOT recover well, or at all, when an individual repo fails for whatever reason.
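FWIW, the pre-flight check suggested above is easy to script: test that the repo's metadata index (repodata/repomd.xml) is reachable before touching zypper. A sketch using curl rather than wget; the URL in the usage comment is a placeholder, not a real repo path:

```shell
#!/bin/sh
# Sketch: check that a repository's metadata index is reachable before updating.

repo_ok() {
    # -s: quiet, -f: fail on HTTP errors, --max-time: don't hang on a dead mirror
    curl -sf --max-time 10 -o /dev/null "$1/repodata/repomd.xml"
}

# Illustrative usage (placeholder path):
# repo_ok "http://download.opensuse.org/<path-to-repo>" \
#     && zypper --non-interactive refresh
```

Of course this only papers over the problem; zypper itself should degrade gracefully when a repo is unreachable.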
> I've never been comfortable running anything from @openSUSE dev (!'home') repos, because the existence of those depends on the whim of that particular developer.
That's a choice. Not one that we make. I don't consider security:netfilter, nor any of the other non-'home' repos we use, to be managed 'on a whim'. As I'd previously mentioned, this issue is NOT limited to non-distro repositories. In any case, it's irrelevant: how zypper fails/recovers should have absolutely no dependency on which repo it's failing on.