Mailinglist Archive: opensuse-features (829 mails)

[openFATE 120326] Resume download
  • From: fate_noreply@xxxxxxx
  • Date: Sat, 4 Jul 2009 21:15:14 +0200 (CEST)
  • Message-id: <feature-120326-71@xxxxxxxxxxxxxx>
Feature changed by: Michal Papis (mpapis)
Feature #120326, revision 71
Title: Resume download

openSUSE-10.2: Rejected by Stanislav Visnovsky (visnov)
reject date: 2006-09-21 09:56:17
reject reason: Not enough resources to implement in time.
Priority
Requester: Desirable
Projectmanager: Desirable

openSUSE-10.3: Rejected by Stanislav Visnovsky (visnov)
reject date: 2007-07-25 15:20:32
reject reason: Out of time. Postponing.
Priority
Requester: Desirable
Projectmanager: Desirable

openSUSE-11.0: Rejected by Jiri Srain (jsrain)
reject date: 2008-03-28 13:51:03
reject reason: Out of resources for 11.0.
Priority
Requester: Desirable
Projectmanager: Important

openSUSE-11.1: Rejected by Stanislav Visnovsky (visnov)
reject date: 2008-07-01 11:34:46
reject reason: Postponing, needs downloading refactor.
Priority
Requester: Desirable

openSUSE-11.2: Evaluation
Priority
Requester: Desirable
Projectmanager: Desirable

Requested by: Klaus Kämpf (kwk)

Description:
YaST/YOU times out too easily when downloading large packages like
kde-base (14 MB) over a single ISDN connection. Please cache the
half-downloaded package so I don't have to start from the beginning
again.
See http://bugzilla.suse.de/show_bug.cgi?id=9740
and http://bugzilla.suse.de/show_bug.cgi?id=278507
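
The resume behaviour the description asks for boils down to an HTTP Range
request plus appending to the partial file. A minimal sketch (not the
actual YaST/libzypp code; the helper names are my own), assuming the
mirror supports Range requests:

```python
import os
import urllib.request

def range_header(existing_bytes):
    """Ask the server to send only the bytes we are still missing."""
    return {"Range": "bytes=%d-" % existing_bytes}

def resume_download(url, dest):
    """Continue a partial download of `url` into `dest`.

    Assumes the mirror honours HTTP Range requests; a robust version
    would also verify the server replied 206 Partial Content rather
    than restarting with a full 200 response.
    """
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers=range_header(offset))
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            out.write(chunk)
```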

Discussion:
#1: Gerald Pfeifer (geraldpfeifer) (2006-06-30 17:40:31)
Klaus, do you know whether this is still an issue?

#2: Klaus Kämpf (kwk) (2006-06-30 18:38:23)
We still have very large packages (kernel, OpenOffice_org) which might
not download completely in one go.

#5: Milisav Radmanic (radmanic) (2006-09-08 15:25:57) (reply to #4)
This applies to the media manager, and Jiri already agreed to return
this to the YaST team, since Marius only helped out during CODE 10.
Marius will of course help with the implementation by sharing his
knowledge.

#7: Stanislav Visnovsky (visnov) (2007-11-23 10:32:46)
Related to commit-refactoring.

#9: Federico Lucifredi (flucifredi) (2008-06-12 20:21:33)
Klaus, are you still running ISDN? Just kidding :-)
Stano, please give your opinion on the workload - is this easily
achievable? If not, there are higher priorities.

#10: Stanislav Visnovsky (visnov) (2008-06-20 13:10:21) (reply to #9)
Jiri, could we get an estimate for this?

#11: Jiri Srain (jsrain) (2008-08-04 08:59:34) (reply to #10)
Since curl itself supports resuming downloads, this feature should not
be hard to implement.
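
The curl support Jiri refers to is the -C - (--continue-at -) option,
which makes curl work out the resume offset from the size of the
existing output file. A hypothetical wrapper (the function name is
mine, not anything from libzypp):

```python
import subprocess

def curl_resume_cmd(url, dest):
    """Build a curl command that continues an interrupted download.

    `-C -` tells curl to derive the resume offset from the current
    size of the output file; `-o` names that file.
    """
    return ["curl", "-C", "-", "-o", dest, url]

# Example (not run here):
# subprocess.run(curl_resume_cmd("http://example.org/kde-base.rpm",
#                                "kde-base.rpm"), check=True)
```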

#12: Ruchir Brahmbhatt (ruchir) (2009-01-17 12:07:25)
I also vote for this feature.

#13: Dmitry Mittov (michael_knight) (2009-01-19 09:01:09)
It is also a serious problem when you use a slow mirror. download.
opensuse.org redirects me to one of the Yandex mirrors (score 20), and
I get timeouts on big packages.

#14: Piotrek Juzwiak (benderbendingrodriguez) (2009-01-21 18:17:13)
I'd vote at least for a way to change the timeout settings for YaST -
or would that not solve the problem?

#15: Alam Aby Bashit (init7) (2009-01-22 09:35:17)
I'd like to vote for this feature to be implemented in 11.2. You surely
want resume capability if you have an unreliable yet slow internet
connection for, say, updating KDE 4.2 :)

#16: Duncan Mac-Vicar (dmacvicar) (2009-01-22 15:14:44) (reply to #15)
Please stop posting "I vote for this", "+1", or "me too" comments.
There is no voting system in FATE yet, and following a discussion full
of "I want this too" makes it hard to evaluate features.

#17: James Mason (bear454) (2009-01-24 06:05:01)
Could this be accomplished using a BitTorrent backend instead of curl?

#18: Jan Engelhardt (jengelh) (2009-01-30 15:31:33) (reply to #17)
ISDN is already slow as it is. I would not want to spend more time
downloading just because of the extra metadata traffic. Not to mention
what happens if there are no peers around, or if they have configured
themselves to limit their uploads. Besides, most download.opensuse.org
downloads are faster than a torrent for me.

#19: Pascal Bleser (pbleser) (2009-03-02 08:42:26)
Caching is one thing. But even using retries in curl would help; see
"curl --retry".

#20: Piotrek Juzwiak (benderbendingrodriguez) (2009-04-23 16:38:17)
Could this be accomplished by using aria for downloading packages?
Then there would be no more problems with timeouts and bad checksums.

#21: Ján Kupec (jkupec) (2009-04-23 17:36:18) (reply to #20)
Actually, we are already using aria in the current development branch,
so this is not so urgent anymore. Still, the download can be
interrupted in other ways than a connection timeout, e.g. by user
decision, a sudden power outage, etc.
Does anyone know whether aria supports resuming? (Implementing this for
the curl backend is no longer important.)

#22: Markus K (kamikazow) (2009-04-25 12:49:41) (reply to #21)
Yes, aria supports resuming -- even better than wget (I don't know
about curl), because aria uses the file size to check whether the
to-be-downloaded file has changed.

#23: Peter Poeml (poeml) (2009-04-28 09:14:12) (reply to #21)
It does. See section "Resuming Download" in its man page
(http://aria2.sourceforge.net/aria2c.1.html#_resuming_download) ; and
also note the -c option.

#24: T. J. Brumfield (enderandrew) (2009-06-12 23:35:43)
Perhaps as an addendum to this feature, I'd like to see the option to
set a number of automatic retries.
If YaST tells me it needs to download 3 GB worth of packages, I don't
want to watch one package time out and hang up the whole process.
I'd like to configure it so that it will automatically retry the
package X times, then skip the package and move on.
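
The retry-then-skip behaviour requested here can be sketched in a few
lines (a sketch only; `fetch` stands in for whatever download call the
backend actually uses):

```python
def fetch_all(packages, fetch, retries=3):
    """Try each package up to `retries` times; on repeated failure,
    skip it and move on instead of stalling the whole transaction."""
    skipped = []
    for pkg in packages:
        for _attempt in range(retries):
            try:
                fetch(pkg)
                break          # success: next package
            except OSError:
                pass           # transient error: retry
        else:
            skipped.append(pkg)  # gave up on this one
    return skipped
```

The caller can then report the skipped packages at the end rather than
aborting mid-run.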

+ #25: Michal Papis (mpapis) (2009-07-04 21:14:32) (reply to #24)
+ It would be good to watch the network status here (maybe by pinging
+ the download server every few minutes) and resume the download after
+ it is back again.
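
Michal's suggestion amounts to polling the server until it answers
again and then resuming. A rough sketch (the `is_reachable` and
`resume` callables are placeholders for a real connectivity check and
the real resume logic):

```python
import time

def wait_and_resume(is_reachable, resume, poll_seconds=120, max_polls=30):
    """Poll the download server; once it answers again, resume.

    Returns True if the download was resumed, False if we gave up
    after `max_polls` unsuccessful checks.
    """
    for _ in range(max_polls):
        if is_reachable():
            resume()
            return True
        time.sleep(poll_seconds)
    return False
```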



--
openSUSE Feature:
https://features.opensuse.org/120326
