Tim,
I have added the Indexes directive to files.opensuse.org. You should be able to go in there and find the uploaded files for all the wikis. More importantly, you can now get all the files together with a single wget command if that will help you out in your quest. Something like the following should work:
wget -r -np -nd "http://files.opensuse.org/opensuse/old-en/"
This will download every file that has been uploaded to the old-en wiki and place it under a single directory. I believe all the filenames should be different, and I believe it is set to no-clobber by default, so this should be a safe command to run. You may also want to consider adding a little wait time between requests (with a -w), so that you don't destroy your bandwidth on this.
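For example, with a one second wait between requests it would be something like:

wget -r -np -nd -w 1 "http://files.opensuse.org/opensuse/old-en/"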
Also, if the idea is for multiple people to do something like this, I could even create a temporary location on files.o.o with all the old-en files in one directory.
-Matt
>>> "Matthew Ehle" <mehle@novell.com> 07/15/10 12:30 AM >>>
>Tim,
>
>How are you determining which files you want to get? If it makes life easier for you, I can just allow index view on files.opensuse.org, which I should probably do anyway. With that, you can download all the files on old-en.opensuse.org into one directory with a single wget command. That would probably make it a lot easier for you to pick and choose the files you need.
>
>I will look into getting one of these upload extensions installed as soon as I get a handle on this session error bug that is affecting imports in the first place. Since it is possible, however unlikely, that one of the new extensions that we installed is related to the problem, I am wary of installing more extensions until we at least know what is going on.
>
>In any case, I'm going to go ahead and set index views on files.o.o before I go to bed and forget all about it.
>
>-Matt
>>> Christian Boltz 07/14/10 4:35 PM >>>
>Hello,
>
>on Mittwoch, 14. Juli 2010, Tim Mohlmann wrote:
>> I'm still trying to work out a script for downloading multiple images at
>> once, to overcome the sub-directory problem. I tried the earlier tips from
>> Rajko and Christian, but it's not working as it should (wget is
>> following a little bit too much).
>
>to get the directory structure:
>
>echo -n 'Image:foo.jpg' | sed 's/^Image://' | md5sum - | sed 's§^\(.\)\(.\).*§\1/\1\2§'
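>For example, if the md5 of "foo.jpg" happened to start with "ab", this prints a/ab - the subdirectory below images/ where MediaWiki keeps the file. (Note the echo -n: the hash has to be taken from the bare name, without a trailing newline.)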
>
>
>Script to download several images (not really tested, but should work):
>
>grep ".." imglist.txt | while read file ; do
>img=$(echo $file | sed 's/^Image://')
>imgpath=$(echo "$img" | md5sum - | sed 's§^\(.\)\(.\).*§\1/\1\2§')
>wget "http://old-en.opensuse.org/images/$imgpath/$img"
>done
>
>This will download all files listed in imglist.txt with wget.
>imglist could look like this:
>
>Image:Foo.jpg
>Bar.jpg <---- Image: is not required (will be cut away anyway)
>Image:Baz.jpg
>
>I assume you know what you are writing into imglist.txt - the script
>does not have any real protection against funny filenames etc.
>Most important: you have to use _ instead of spaces (or add another sed ;-)
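>For example (untested), the img= line in the script could be extended to convert the spaces on the fly:
>img=$(echo "$file" | sed 's/^Image://; s/ /_/g')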
>
>
>If you prefer to use commandline parameters instead of a file with the
>filelist, replace the first line with
>for file in "$@" ; do
>
>
>Regards,
>
>Christian Boltz
>
>PS: In case you wonder: the "grep .." is a trick to skip empty lines ;-)