[zypp-devel] [SoC-student] libzypp HTTP download failover
Hello :-)

My name is Gerard Farràs. I am a graduate student in Computer Science at the UOC [1], in Barcelona, and I am interested, within the framework of the Google Summer of Code [2], in working on the project "Concept for libzypp doing failover when downloading packages from download.opensuse.org" [3].

Do you know whether there are already many students interested in this project? If so, I could sign up for some other project instead...

Moreover, I have already begun installing the development environment and taking a look at the code. Is this list the most appropriate place for the questions that come up?

Thank you!

[1] http://www.uoc.edu
[2] http://code.google.com/soc/2008/
[3] http://en.opensuse.org/Libzypp/Failover

--
To unsubscribe, e-mail: zypp-devel+unsubscribe@opensuse.org
For additional commands, e-mail: zypp-devel+help@opensuse.org
Gerard Farràs i Ballabriga wrote:
Hello :-)
My name is Gerard Farràs. I am a graduate student in Computer Science at the UOC [1], in Barcelona, and I am interested, within the framework of the Google Summer of Code [2], in working on the project "Concept for libzypp doing failover when downloading packages from download.opensuse.org" [3].
Do you know whether there are already many students interested in this project? If so, I could sign up for some other project instead...
Moreover, I have already begun installing the development environment and taking a look at the code. Is this list the most appropriate place for the questions that come up?
Thank you!
[1] http://www.uoc.edu [2] http://code.google.com/soc/2008/ [3] http://en.opensuse.org/Libzypp/Failover
If you sign up, I will gladly assist with the libzypp part, together with Peter for the download.opensuse.org side. Yes, this list is a good place.

Saludos
Duncan
On Fri, Mar 28, 2008 at 06:26:07PM +0100, Duncan Mac-Vicar Prett wrote:
If you sign up, I will gladly assist for the libzypp part, together with Peter for the download.opensuse.org.
Great! Thank you very much, Duncan!
Yes, this list is a good place.
Saludos Duncan
Peter -- "WARNING: This bug is visible to non-employees. Please be respectful!" SUSE LINUX Products GmbH Research & Development
Hi, Gerard Farràs i Ballabriga wrote:
Hello :-)
My name is Gerard Farràs. I am a graduate student in Computer Science at the UOC [1], in Barcelona, and I am interested, within the framework of the Google Summer of Code [2], in working on the project "Concept for libzypp doing failover when downloading packages from download.opensuse.org" [3].
Thanx for the interest!

The project proposal is interesting, but IMO it can't be implemented as proposed now. It needs to be discussed.

Just a quick thought. Two things that cross my mind are:

1) The idea of downloading & parsing a mirror list for each file doesn't sound appealing to me.
- Downloading a mirror list for files as small as $repo/media.1/media is pointless.
- It would be fine if the fetching of the mirror list happened only in case of error, BUT this is also not easy: an error can occur outside of the media back-end at various places (e.g. a checksum failure is handled outside of the media back-end, in the Fetcher).

2) The feature is specific to download.opensuse.org (for now).
- We would need to hardcode an is_download_opensuse_org condition to avoid useless requests for other URLs (or introduce a mechanism to query for the availability of such a capability and check it when starting zypp). This would not apply if the mirror list were requested and processed only on errors.

There may be other issues, too. Opinions?
Moreover, I have already begun installing the development environment and doing a look at the code: Is this list the most appropriate for the doubts that arise me?
Yes, you're in the right place. Feel free to ask.

Cheers,
jano
Thank you!
[1] http://www.uoc.edu [2] http://code.google.com/soc/2008/ [3] http://en.opensuse.org/Libzypp/Failover
On 28/03/2008, Jan Kupec wrote:
Just a quick thought. Two things that cross my mind are: 1) the idea of downloading & parsing a mirror list for each file doesn't sound appealing to me. - downloading a mirror list for files as small as $repo/media.1/media is pointless
I wouldn't say pointless, but undesirable. If the primary mirror is not available then there needs to be a possibility to fall back to another mirror. Metadata is the most important thing to have failover for. Obviously downloading and parsing a mirror list for every request is not sensible, but perhaps it could be done at repository level rather than file level, to provide a list of candidate mirrors.
2) the feature is specific to downloads.opensuse.org (for now) - we would need to hardcode a is_download_opensuse_org condition to avoid useless requests for other URLs (or introduce a mechanism to query for availability of such capability and check that when starting zypp). This would not apply if the mirror list would be requested and processed only on errors.
One of the causes of errors is the redirector itself being unavailable, at which point it is impossible to query the mirror list. So querying the mirror list only on error doesn't help much. Querying on repository add, and again on error, might be an option.

--
Benjamin Weber
On Fri, Mar 28, 2008 at 07:08:49PM +0000, Benji Weber wrote:
On 28/03/2008, Jan Kupec wrote:
Just a quick thought. Two things that cross my mind are: 1) the idea of downloading & parsing a mirror list for each file doesn't sound appealing to me.
[...] Obviously downloading and parsing a mirror list for every request is not sensible, [...]
It seems that you (and Jan) think that the parsing might be complicated. In fact, this is not the case. It requires nothing more than reading the first line from the mirror list and extracting the URL from it. The mirror list is already sorted. The URL is the ready-made URL to go to.

Just like the client now reads the server's reply headers, sees the Location: header, grabs the URL from it and follows it, it would read the mirror list, grab the URL from the first line, and follow it. That's really all there is to it. In case of an error, it can simply try the next one. You see?

(Of course there are more sophisticated things that the client *could* also do, but that's all optional, and not required.)

Peter
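The scheme described above can be sketched in a few lines. Here is a minimal illustration in Python, assuming a one-URL-per-line mirror list as described in the thread; the function names are hypothetical, not libzypp API:

```python
# Minimal sketch of the scheme Peter describes: the server sends a
# pre-sorted mirror list, the client follows the first URL and falls
# back to the next line on error. The one-URL-per-line text format is
# an assumption for illustration.
def parse_mirrorlist(text):
    """Return candidate URLs in server order (first line = preferred mirror)."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def next_mirror(text, failed=()):
    """Pick the first URL not yet marked as failed; None once all are exhausted."""
    for url in parse_mirrorlist(text):
        if url not in failed:
            return url
    return None
```

On a transfer error the client would add the URL to `failed` and call `next_mirror` again, which is exactly the "try the next one" behavior described.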
Hi, On Fri, Mar 28, 2008 at 06:26:59PM +0100, Jan Kupec wrote:
Gerard Farràs i Ballabriga wrote:
Hello :-)
My name is Gerard Farràs. I am a graduate student in Computer Science at the UOC [1], in Barcelona, and I am interested, within the framework of the Google Summer of Code [2], in working on the project "Concept for libzypp doing failover when downloading packages from download.opensuse.org" [3].
Thanx for the interest!
The project proposal is interesting, but IMO can't be implemented as proposed now. It needs to be discussed.
It definitely needs discussion and further refinement -- that's why I posted it here -- and I'm thankful for your input.
Just a quick thought. Two things that cross my mind are: 1) the idea of downloading & parsing a mirror list for each file doesn't sound appealing to me.
Parsing the mirror list in the client is an affordable effort, in the context of a network-bound operation. Each file download involves an HTTP request anyway.

Just consider that the download server itself is able to do the same a thousand times per second. The client will never download more than a few files per second, even from a local server.

Remember that a client's download request typically involves more than a single HTTP request anyway: a name lookup, then an HTTP request whose reply is parsed and typically consists of an HTTP redirect, which results in a second name lookup and a second HTTP request. What would be different from today is that the client chooses the mirror itself, instead of the server choosing it.

In addition, there are a number of reasons to do all this at file level. Only few mirrors are complete and up to date in all regards, and we are working with highly dynamic repositories (like KDE from the buildservice) as well as with the classic, more static ones (like the 10.3 repo). The static ones, that many of you guys are familiar with, are only one part of what we deal with today.

I could tell you pretty exactly which files have short turnaround times, and in which ways the client "breaks" if it gets outdated files (and therefore an inconsistent state). I have seen all the bugs resulting from it, and I have tuned the cache control headers which we serve to take it into account.

Therefore, the cost of parsing a mirror list per request seems absolutely reasonable to me, considering what the client can do with it. It isn't much more work than parsing the HTTP redirect, anyway ;)

I'm open to being convinced otherwise. And I'm grateful for your input!
- downloading a mirror list for files as small as $repo/media.1/media is pointless
I don't agree with this -- on the contrary, the client needs _all_ files for correct operation, and a way to fall back for each of them. It is independent of file size.

You may suggest doing all this at directory level. The problem, though, is manifold:

- Not every mirror carries all parts of a repository. Think of a mirror that excludes debuginfos, ppc, or sources when mirroring. In fact, mirrors do that, will do it, and must do it, because our repositories are simply too large.
- Repositories change over time -- and some do often. Only because rpm filenames change with each rebuild are we able to redirect for those at all. We would _not_ be able to redirect for the metadata at all -- and in fact we don't. There is no efficient way to make sure that we know when those files have been updated on a mirror.
- We like to keep file-level requests on the download server because they give us insight into repository usage (statistics).

In the presentation I gave at FOSDEM I went into some more detail on this, and why it is important. http://www.poeml.de/~poeml/talks/redirector/
- it would be fine if the fetching of the mirror list happens only in case of error, BUT this is also not easy - an error can occur outside of the media back-end at various places (e.g. checksum failure is something which is handled outside of the media back-end - in the Fetcher)
This is an interesting idea -- I need to think about it. I believe it would only make things more complicated. In addition, I believe we would lose some interesting possibilities that my proposal would give us. The idea is that all base URLs (the part which points to the repository toplevel directory) would be saved by the client. Thereby it would accumulate a list of those base URLs. This can enable the client to try them autonomously, should the redirector itself be unreachable. Being able to continue to work if the redirector can't be reached is an essential part of the proposal.

Your concern about the handling of checksums is valid and important. I suggest that the client blacklists a mirror which returned a "broken" file, for the duration of the "session". (Every mirror has an ID and an identifier string, which could be attached to the locally cached object and used for blacklisting the mirror on retry.)

BTW: Checksumming *could* be done at a lower level (with each file request) *if* the mirror lists were metalinks (http://metalinker.org), or had similar capabilities. Those contain checksums which can be used to ensure transfer integrity. I'm contemplating adding metalink support to the redirector, and whether that could be a way to achieve the goal we are discussing here. There are a number of clients out there which understand metalinks, and that would help for ISO downloads just as well -- not only for the specialized libzypp client.

This is an interesting area which calls for exploration.
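The session-scoped blacklisting proposed above could look roughly like the following sketch. The mirror-ID string is the per-mirror identifier mentioned in the mail; how it gets attached to cached objects is left open, and the class name is hypothetical:

```python
# Sketch of session-scoped mirror blacklisting after a "broken" file.
# Lives only for the duration of one session; nothing is persisted.
class MirrorBlacklist:
    def __init__(self):
        self._broken = set()

    def mark_broken(self, mirror_id):
        """Called e.g. after a checksum failure detected in the Fetcher."""
        self._broken.add(mirror_id)

    def usable(self, mirror_id):
        """Check before retrying a download from this mirror."""
        return mirror_id not in self._broken
```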
2) the feature is specific to downloads.opensuse.org (for now) - we would need to hardcode a is_download_opensuse_org condition to avoid useless requests for other URLs (or introduce a mechanism to query for availability of such capability and check that when starting zypp). This would not apply if the mirror list would be requested and processed only on errors.
It is not necessary to have such a hard-coded condition. The client can indicate (in the HTTP request), with an HTTP/1.1 Accept header, that it is able to accept a mirror list instead of the file or a redirect to the file. (Older clients would continue to work.) The server can then reply with a list to those clients that send that Accept header. The client will be able to tell by the MIME type whether it got a mirror list or a file. (And of course it will still transparently follow redirects, regardless.)

Thanks,
Peter
Dr. Peter Poeml wrote:
It definitely needs discussion and further refinement -- that's why I posted it here -- and I'm thankful for your input.
I'm sorry I somehow overlooked this thread you started a month ago: http://lists.opensuse.org/zypp-devel/2008-03/msg00020.html
Just a quick thought. Two things that cross my mind are: 1) the idea of downloading & parsing a mirror list for each file doesn't sound appealing to me.
Parsing the mirror list in the client is an affordable effort, in the context of a network-bound operation. Each file download involves an HTTP request anyway.
Just think that the download server itself is able to do the same a 1000 times per second. The client will never download more than a few files per second even from a local server.
Remember that a client's download request involves typically more than a simple HTTP request anyway. Typically it is a name lookup, HTTP request, which is parsed and typically consists of a HTTP redirect, which results in a second name lookup, and HTTP request.
What would be different to today is that the client chooses the mirror itself, instead of the server choosing it.
In addition, there is a number of reasons to do all this on file level. Only few mirrors are complete + up to date in all regards, and we are working with highly dynamic repositories (like KDE from the buildservice) as well as well with the classic, more static ones (like 10.3 repo). The static ones, that many of you guys are familiar with, are only one part of what we deal with today.
I could tell you pretty exactly which files have short turnaround times, and in which ways the client "breaks" if it gets outdated files (and therefore an inconsistent state. I have seen all the bugs resulting from it. And I have tuned the cache control headers which we server to take it into account.
Therefore, the cost of parsing a mirror list per request seems absolutely reasonable to me, considering what the client can do with it.
It isn't much more work than parsing the HTTP redirect, anyway ;)
Nice. I wasn't aware this was true :O) Additionally, you said in some other mail in this thread that the mirror list would be sorted and that libzypp just needs to take the first one if everything goes well and fall back to the next on error (maybe I overlooked this in your proposal on the wiki?). *That* sounds really good.
I'm open to be convinced of anything else. And I'm grateful for your input!
- downloading a mirror list for files as small as $repo/media.1/media is pointless
I don't agree with this -- on the contrary, the client needs _all_ files for correct operation, and a way to fall back for each of them. It is independent of file size.
My point was that these files could always be fetched right from download.opensuse.org and never redirected/requested from a mirror. The drawback would be that this wouldn't cover an outage of download.o.o itself.
You may suggest to do all this on directory level. The problem though is manifold.
- not every mirror carries all parts of a repository. Think of a mirror that excludes debuginfos, ppc, or sources when mirroring. In fact, mirrors do that, will do it and must do that because our repositories are simply too large. - repositories change over time -- and some do often. Only because rpm filenames change with each rebuild are we able to redirect for those at all. We would _not_ be able to redirect for the metadata at all -- and if fact we don't. There is no efficient way to make sure that we know when those files have been updated on a mirror. - We like to keep file level requests to the download server because it gives us insight in repository usage (statistics)
In the presentation I gave on the FOSDEM I went into some more detail on this, and why it is important. http://www.poeml.de/~poeml/talks/redirector/
- it would be fine if the fetching of the mirror list happens only in case of error, BUT this is also not easy - an error can occur outside of the media back-end at various places (e.g. checksum failure is something which is handled outside of the media back-end - in the Fetcher)
This is an interesting idea -- I need to think about it. I believe it would only make things more complicated.
Given the additional info you mentioned, I agree.
In addition, I believe we would lose some interesting possibilities that my proposal would give us. The idea is that all base URLs (the part which points to the repository toplevel directory) would be saved by the client. Thereby it would accumulate a list of those base URLs. This can enable the client to try them autonomously, should the redirector itself be unreachable.
Being able to continue to work if the redirector can't be reached is an essential part of the proposal.
Your concern about the handling of checksums is valid and important.
I suggest that the client blacklists a mirror which returned a "broken" file, for the duration of the "session". (Every mirror has an ID and an identifier string, which could be attached to the locally cached object, which could be used for blacklisting the mirror on retrying.)
BTW: Checksumming *could* be done at a lower level (with each file request) *if* the mirror lists were metalinks (http://metalinker.org), or had similar capabilities. Those contain checksums which can be used to ensure transfer integrity. I'm contemplating adding metalink support to the redirector, and whether that could be a way to achieve the goal we are discussing here. There are a number of clients out there which understand metalinks, and that would help for ISO downloads just as well -- not only for the specialized libzypp client.
This is an interesting area which calls for exploration.
Interesting indeed. Something like this could replace the need to store checksums into the metadata.
2) the feature is specific to downloads.opensuse.org (for now) - we would need to hardcode a is_download_opensuse_org condition to avoid useless requests for other URLs (or introduce a mechanism to query for availability of such capability and check that when starting zypp). This would not apply if the mirror list would be requested and processed only on errors.
It is not necessary to have such a hard-coded condition. The client can indicate (in the HTTP request) with an HTTP/1.1 Accept header that it is able to accept a mirror list, instead of file or a redirect to the file. (Older clients would continue to work.)
My idea was the other way around -- the server would indicate that it can provide a mirror list. If not, the client would use the old way of fetching files from that server throughout the session. Would this be possible?

Cheers,
jano
The server can then reply with a list to those clients that send that Accept header. The client will be able to tell by the MIME type if it got a mirror list or a file. (And of course it will still transparently follow redirects, regardless.)
On Thu, Apr 03, 2008 at 01:07:26PM +0200, Jan Kupec wrote:
Therefore, the cost of parsing a mirror list per request seems absolutely reasonable to me, considering what the client can do with it.
It isn't much more work than parsing the HTTP redirect, anyway ;)
Nice. I wasn't aware this was true :O) Additionally, you said in some other mail in this thread that the mirror list would be sorted and that libzypp just needs to take the first one if everything goes well and fall back to the next on error (maybe I overlooked this in your proposal on the wiki?). *That* sounds really good.
Exactly. I should update the proposal to make that clearer. (I'm doing so right now.)
- downloading a mirror list for files as small as $repo/media.1/media is pointless
I don't agree with this -- on the contrary, the client needs _all_ files for correct operation, and a way to fall back for each of them. It is independent of file size.
My point was that these files could always be fetched right from download.opensuse.org and never redirected/requested from a mirror. The drawback would be that this wouldn't cover an outage of download.o.o itself.
Ah, I see. That's true. In fact, the redirector is able to make an exception for those smaller files (ZrkadloMinSize), although we don't actually use it. For small files, it is as cheap to return the file as to look up mirrors in the database and return a redirect or a mirror list. Or cheaper. So far, ZrkadloMinSize is 0 in our setup, so this remains as headroom for further scalability.

To be honest, I haven't used it so far because I reckon that the consistency the client is going to see might be marginally higher _without_ that exception. But thinking about it, we have reached a high level of correctness now, so we should be able to run with it just as well (and should try it).
You may suggest to do all this on directory level. The problem though is manifold. [...] Given the additional info you mentioned, i agree.
Fine!
BTW: Checksumming *could* be done at lower level (with each file request) *if* the mirrorlist would be metalinks (http://metalinker.org), Interesting indeed. Something like this could replace the need to store checksums into the metadata.
It could only supplement it. The metadata needs to be verifiable for other clients which are not metalink-enabled, so it needs to contain the checksums. And libzypp itself needs to be able to use it like today if the server doesn't reply with a metalink. The main advantage for libzypp would be that it is able to detect a "broken" transfer much earlier (and actually fix it already during the download).
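Detecting a broken transfer during the download, as described above, amounts to hashing incrementally while receiving. A sketch, where the chunk iterator and the choice of SHA-1 are assumptions (metalinks can carry several hash types):

```python
import hashlib

# Sketch of verifying a transfer while it arrives, instead of only after
# the whole file is on disk. "chunks" stands in for whatever the HTTP
# backend delivers piecewise; SHA-1 is just one hash type metalinks allow.
def download_with_checksum(chunks, expected_sha1):
    h = hashlib.sha1()
    data = bytearray()
    for chunk in chunks:
        h.update(chunk)   # hash incrementally, as each piece arrives
        data.extend(chunk)
    if h.hexdigest() != expected_sha1:
        # here the client would blacklist the mirror and retry elsewhere
        raise ValueError("checksum mismatch")
    return bytes(data)
```

A real implementation could go further and verify per-piece checksums (which metalinks also support), aborting even earlier than at end-of-file.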
2) the feature is specific to downloads.opensuse.org (for now) - we would need to hardcode a is_download_opensuse_org condition to avoid useless requests for other URLs (or introduce a mechanism to query for availability of such capability and check that when starting zypp). This would not apply if the mirror list would be requested and processed only on errors.
It is not necessary to have such a hard-coded condition. The client can indicate (in the HTTP request) with an HTTP/1.1 Accept header that it is able to accept a mirror list, instead of file or a redirect to the file. (Older clients would continue to work.)
My idea was the other way around -- the server would indicate that it can provide a mirror list. If not, the client would use the old way of fetching files from that server throughout the session. Would this be possible?
Possible, yes, but since

- HTTP is stateless and can be intercepted by intermediate caches,
- mirror lists are valid only per file (don't forget this),
- the client typically works with more than one repository, possibly hosted on different servers,
- the server might have the _ability_ to send mirror lists, but might not actually want to do that for the files the client will request (because it wants to deliver the file on its own, or there is no known mirror),

I suggest keeping this per request, not per session. It is the most flexible and also the simplest, in my opinion. Since the client is the one who initiates all communication, it also saves an additional request if the client is the one who indicates its willingness to accept a mirror list.

The client needs to be able to handle three possible cases:

- 200 OK, Content-Type != application/mirrorlist: receive the file.
- 200 OK, Content-Type == application/mirrorlist: follow the first URL.
- 302 Found: follow the Location header (standard redirect).

That's assuming a healthy redirector. To handle a non-reachable redirector (or one returning garbage), it should:

- in case of failure (timeout, garbage), use one of the cached base URLs.

Peter
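The case analysis above can be written down directly as a dispatch function. A sketch, where the application/mirrorlist MIME type is the one discussed in this thread and everything else (names, the action/payload convention) is hypothetical:

```python
MIRRORLIST_TYPE = "application/mirrorlist"  # MIME type as discussed in this thread

# Sketch of the client-side dispatch over the three healthy-redirector
# cases plus the fallback. Returns an (action, payload) pair that a real
# client would loop over, re-requesting whenever the action is "fetch".
def handle_response(status, content_type, body, location=None):
    if status == 302:
        return ("fetch", location)                 # standard redirect
    if status == 200 and content_type == MIRRORLIST_TYPE:
        first = body.splitlines()[0].strip()
        return ("fetch", first)                    # follow the first mirror
    if status == 200:
        return ("done", body)                      # got the file itself
    return ("fallback", None)                      # timeout/garbage: use cached base URLs
```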
On Thu, Apr 03, 2008 at 02:02:35PM +0200, Dr. Peter Poeml wrote:
On Thu, Apr 03, 2008 at 01:07:26PM +0200, Jan Kupec wrote:
Therefore, the cost of parsing a mirror list per request seems absolutely reasonable to me, considering what the client can do with it.
It isn't much more work than parsing the HTTP redirect, anyway ;)
Nice. I wasn't aware this was true :O) Additionaly, you said in some other mail in this thread that the mirror list would be sorted and that libzypp just needs to take the first one if everything goes well and fall back to the next on error (maybe i overlooked this in your proposal on the wiki?). *That* sounds really good.
Exactly.
I should update the proposal to make that clearer. (I'm doing so right now.)
http://en.opensuse.org/Libzypp/Failover is now updated.

Peter
On Fri, Mar 28, 2008 at 09:53:52PM +0100, Dr. Peter Poeml wrote:
BTW: Checksumming *could* be done at lower level (with each file request) *if* the mirrorlist would be metalinks (http://metalinker.org), or have similar capabilities. Those contain checksums which can be used to ensure transfer integrity. I'm contemplating about adding metalink support to the redirector and whether that could be a way to achieve the goal we are discussing here. There is a number of clients out there which understand metalinks, and that would help for iso downloads just as well -- not only the specialized libzypp client.
This is an interesting area which calls for exploration.
Yesterday, metalink support in the redirector became a reality. *However*, while working with the metalink format it became clear to me that it is more desirable *for libzypp* to have its own, text-only format. This is in order to avoid a dependency on an XML-parsing library, and to avoid the work of parsing altogether. As indicated in an earlier mail, it is totally appropriate and sufficient if libzypp just reads the first line of the mirror list and follows that URL. But if you say libzypp has XML support anyway (I suppose it would, in order to be able to use repo-md repositories), metalink support is (nearly) there.
2) the feature is specific to downloads.opensuse.org (for now) - we would need to hardcode a is_download_opensuse_org condition to avoid useless requests for other URLs (or introduce a mechanism to query for availability of such capability and check that when starting zypp). This would not apply if the mirror list would be requested and processed only on errors.
It is not necessary to have such a hard-coded condition. The client can indicate (in the HTTP request) with an HTTP/1.1 Accept header that it is able to accept a mirror list, instead of file or a redirect to the file. (Older clients would continue to work.)
The server can then reply with a list to those clients that send that Accept header. The client will be able to tell by the MIME type if it got a mirror list or a file. (And of course it will still transparently follow redirects, regardless.)
This is probably also the way metalink support is going to work for enabled clients. Metalink-enabled clients will indicate their ability, and get a metalink or the file. Other clients will get a redirect or the file.

Peter
Hi, On Thu, 3 Apr 2008, Dr. Peter Poeml wrote:
Yesterday, metalink support in the redirector became a reality.
*However*, while working with the metalink format it became clear to me that it is more desirable *for libzypp* to have its own, text-only format.
This in order to avoid dependencies on a library parsing XML, and avoid the work of parsing altogether. As indicated in an earlier mail, it is totally appropriate and sufficient if libzypp just reads the first line of the mirror list and follows that URL.
But if you say, libzypp has XML support anyway (I suppose it would have, in order to be able to use repo-md repositories), metalink support is (nearly) there.
libzypp does have XML support, but it uses libxml2, which is much larger and much slower than libexpat. It's also a bit sad that it's linked against that at all nowadays, because it is only used for parsing the entry file for repo-md repos (i.e. repomd.xml). None of the other files are parsed by libzypp. And the XML store doesn't exist anymore either, so there's no need for XML on that front.

Ciao,
Michael.
2008/3/28, Jan Kupec wrote:
The project proposal is interesting, but IMO can't be implemented as proposed now. It needs to be discussed.
No problem. Moreover, for me there are right now many new concepts to explore and learn (the actual architecture, protocols, code, etc.), so it's complicated for me to make a detailed implementation plan, as required by Google. For this reason, any input, suggestions (or criticism :-( ) on my "Student Dashboard" will be appreciated :-)

Thanks,
Gerard
On 29/03/2008, Gerard Farràs i Ballabriga wrote:
2008/3/28, Jan Kupec
The project proposal is interesting, but IMO can't be implemented as proposed now. It needs to be discussed.
No problem.
Moreover, for me there are right now many new concepts to explore and learn (the actual architecture, protocols, code, etc.), so it's complicated for me to make a detailed implementation plan, as required by Google.
For this reason, any input, suggestions (or criticism :-( ) on my "Student Dashboard" will be appreciated :-)
Just a note that the deadline is very soon (31st March -- two days) -- so everyone please hurry with comments/suggestions :-)

Kind thoughts,
--
Francis Giannaros
http://francis.giannaros.org
Hello, On Fri, Mar 28, 2008 at 06:26:59PM +0100, Jan Kupec wrote:
Gerard Farràs i Ballabriga wrote:
Hello :-)
My name is Gerard Farràs. I am a graduate student in Computer Science at the UOC [1], in Barcelona, and I am interested, within the framework of the Google Summer of Code [2], in working on the project "Concept for libzypp doing failover when downloading packages from download.opensuse.org" [3].
Thanx for the interest!
The project proposal is interesting, but IMO can't be implemented as proposed now. It needs to be discussed.
Just a quick thought. Two things that cross my mind are:
Jan, was I able to address your concerns to your satisfaction? Do you have further concerns, input, questions, ...?

Peter
2008/3/28 Gerard Farràs i Ballabriga wrote:
My name is Gerard Farràs. I am a graduate student in Computer Science at the UOC [1], in Barcelona, and I am interested, within the framework of the Google Summer of Code [2], in working on the project "Concept for libzypp doing failover when downloading packages from download.opensuse.org" [3].
And... the project was accepted [1] ;-)

So I have already begun to explore the tools, the code, the different classes, etc. Any recommendations on where to start? Can someone suggest a simple exercise :-)?

Thanks!
Gerard

[1] http://code.google.com/soc/2008/suse/about.html
Gerard Farràs i Ballabriga wrote:
And... the project was accepted [1] ;-)
So I have already begun to explore the tools, the code, the different classes, etc.
Any recommendations on where to start? Can someone suggest a simple exercise :-)?
Maybe compiling/installing libzypp from svn trunk into a prefix, then compiling zypper against this libzypp.

Duncan
2008/4/28, Duncan Mac-Vicar Prett wrote:
Maybe compiling/installing libzypp from svn trunk into a prefix, then compiling zypper against this libzypp.
This step is "already" done ;-). What next?

Thanks,
Gerard
Gerard Farràs i Ballabriga wrote:
2008/4/28, Duncan Mac-Vicar Prett
Maybe compiling/installing libzypp from svn trunk into a prefix, then compiling zypper against this libzypp.
This step is "already" done ;-). What next?
Thanks,
Gerard
http://en.opensuse.org/Libzypp/Design (needs some updating, but most of it is still valid)
http://en.opensuse.org/Libzypp/Programmers_Guide

Then I would start looking at how the different layers interact. At the lowest level you have plain curl. This is wrapped by MediaManager (in the media directory), which has different backends; curl is just one of them. Then there is MediaSetAccess on top, which adds the ability to manage multiple media sets (CDs) and to set checkers for when the wrong media is inserted. Then there is Fetcher, which allows queueing jobs and can check whether the requested files are already present in some local cache. Then there are the repo/Downloaders, which are Fetchers themselves, and which have the specific logic for how to download one type of repository. Then you have RepoMediaAccess, which knows more specific information about the repository itself and sets the correct media verifier using the repository metadata's media information.

The idea is that the upper layers provide most of the functionality using the lower ones, but the design always allows you to use the low-level pieces directly (most of the time).

Duncan
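Purely as an illustration of the layering described above, here is a toy Python sketch of the bottom two levels: a transfer backend and a job-queueing fetcher with a local cache. These are stand-ins for the C++ classes, not the real libzypp API; all names and signatures are hypothetical:

```python
# Toy stand-ins showing how each layer delegates to the one below it:
# a transfer backend at the bottom, a job-queueing Fetcher with a local
# cache on top. Not the real libzypp API -- illustrative only.
class MediaBackend:                     # lowest level: wraps e.g. curl
    def provide(self, url):
        return f"bytes of {url}"        # placeholder for the real transfer

class Fetcher:                          # queues jobs, consults a local cache first
    def __init__(self, media):
        self.media = media
        self.queue = []
        self.cache = {}

    def enqueue(self, url):
        self.queue.append(url)

    def start(self):
        results = []
        for url in self.queue:
            if url not in self.cache:   # only hit the lower layer on a cache miss
                self.cache[url] = self.media.provide(url)
            results.append(self.cache[url])
        self.queue.clear()
        return results
```

The real stack adds further layers on top (repo/Downloaders, RepoMediaAccess) that each reuse the layer below in the same delegating style.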
participants (7)
- Benji Weber
- Dr. Peter Poeml
- Duncan Mac-Vicar Prett
- Francis Giannaros
- Gerard Farràs i Ballabriga
- Jan Kupec
- Michael Matz