[SLE] using dump and storing on CD's
I'd like to do a level 0 backup of my system using dump, but I don't have tapes and was thinking of storing the backup on CDs. A full backup will not fit on a single CD, but I have a second HD where I can temporarily store the output from dump, so I thought of doing the following:

dump -0u -B 600000 -f /the-second-hard-drive/file-name /dev/the-system

This will start dump and stop once the 600000-block (roughly 600 MB) limit is reached, asking whether the second volume is ready; at that point I rename the just-generated volume (say, to file-name-a), answer yes, and continue this way. Once done, I copy each of "file-name-a", "file-name-b", etc., to a separate CD.

I know I can execute the dump command and do the renaming. But I wonder if I will be able to restore the files. Is there a better way to do this? (I've been looking at tar, cpio, and zip, but I think dump might be the way to go.)

Thanks,
Ramon
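For what it's worth, restore should be able to read those renamed volumes back one by one, so a quick check before burning anything is worthwhile. A rough sketch using the file names from the example above; the exact behaviour depends on the dump/restore version in use (restore either lets you type the next file name at its "mount next volume" prompt, or expects the next volume at the same -f path, in which case renaming or symlinking each volume into place when asked works too):

--------------------------------------------------->
# list the contents of the first volume to check it is readable
restore -t -f /the-second-hard-drive/file-name-a

# interactive restore: start with the first volume and supply
# file-name-b, file-name-c, ... when restore asks for the next volume
restore -i -f /the-second-hard-drive/file-name-a
<---------------------------------------------------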
On Wed, 29 Dec 1999, Ramon Diaz-Uriarte wrote:
dump -0u -B 600000 -f /the-second-hard-drive/file-name /dev/the-system
This is *by far* the most intelligent method of backing up a system I've seen in a long time. One could simply take this file, include an El Torito image of LOAF or a similar single-floppy Linux, and restore directly from the bootable CD.

Unfortunately, Ramon, I only have more questions and no answers :(. I wonder about the limits of this method. I've never seen a CD-ROM filled `to the gills' with data, and therefore have nothing to study. What is the size limit on a standard 640MB CD-R, or a 700MB one for that matter? In other words, if you only have a single file, the allocation table, and another file (<= 1440k) on the disc, how big can that large file be? Simple algebra, I know, but *where* does one start to find information on the ISO9660 file system? According to the CD-R HOWTO, 620MB is an acceptable size, but those of you who know me know that I like to push the limits sometimes :].

Another question that I have is whether or not there is a way to compress the data that dump processes/generates. Using file limits in the dump command itself (above) is pointless if you'll simply compress the file afterwards.

I once used dd to read a partition and then compressed the image with bzip2. It was a 2GB partition with ~600MB of data on it. The process from start to finish (dd; bzip2) took around 2 hours. Using dd, the resulting image was exactly 2 gigs (I assumed that much), but I was shocked to see that the *compressed* file was well over 1 gig in size! I thought that surely bzip2 was more intelligent than that, but I was wrong. Using dump makes a lot more sense, to say the least.

-- 
-=|JP|=-
Jon Pennington         | Atipa Linux Solutions   -o)
jpennington@atipa.com  | http://www.atipa.com    /\\
Kansas City, MO, USA   | 816-241-2641            _\_V
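The likely reason that compressed dd image stayed so large is that dd copies every block of the partition, including blocks that once held deleted files, so the "free" space is full of old, effectively random data that hardly compresses. A common workaround (not from this thread, and the device and mount point below are made-up examples) is to fill the free space with zeros before imaging, since long runs of zeros compress extremely well:

--------------------------------------------------->
#!/bin/sh
# Fill the free space with zeros, then delete the zero file.
# dd stops with "No space left on device" - that is expected.
dd if=/dev/zero of=/mnt/data/zerofill bs=1024k
rm -f /mnt/data/zerofill
sync

# Image the partition (ideally unmounted or read-only by now)
# and compress on the fly instead of storing the raw image first.
dd if=/dev/hda2 bs=1024k | gzip -9 > /tmp/hda2.img.gz
<---------------------------------------------------

Dump sidesteps the whole issue because it only reads blocks that are actually in use.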
Hi,

On Wed, Dec 29, 1999 at 16:46 -0600, Jon Pennington wrote:
Another question that I have is whether or not there is a way to compress the data that dump processes/generates. Using file limits in the dump command itself (above) is pointless if you'll simply compress the file afterwards.
You could use afio instead of dump for this. It can compress every single file before putting it into an archive, and it supports multivolume archives. A simple approach to create such an archive on floppy disks could be:

mount /mnt/floppy
find /home | afio -o -Z -s 1m -H script1 /mnt/floppy/test.afio.gz

Now afio will build an archive with 1 MB volumes. After creating each volume it will execute script1, which could look like this:

--------------------------------------------------->
#!/bin/sh
umount /mnt/floppy || exit 1
echo "Insert floppy No $1, press any key when ready"
read
mount /mnt/floppy || exit 1
rm -f /mnt/floppy/*
exit 0
<---------------------------------------------------

To restore the archive, insert the first floppy and do a

mount /mnt/floppy
afio -i -Z -s 0 -H script2 /mnt/floppy/test.afio.gz

script2 could be something like

--------------------------------------------------->
#!/bin/sh
umount /mnt/floppy || exit 1
echo "Insert floppy No $1, press any key when ready"
read
mount /mnt/floppy || exit 1
exit 0
<---------------------------------------------------

It shouldn't be hard to change the shell scripts so that they burn each volume on CD.

Ciao,
Stefan
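Picking up on that last point, script1 might be adapted roughly like this to burn each finished volume straight to CD when writing to a hard disk instead of floppies. This is only a sketch: the archive path, staging directory, cdrecord device address (dev=0,0,0) and speed are assumptions to adjust, and it assumes, as the prompt in script1 suggests, that $1 is the number of the volume about to be written:

--------------------------------------------------->
#!/bin/sh
# Called by afio between volumes; the volume just finished is $1-1.
ARCHIVE=/backup/test.afio.gz        # the path given to afio -o
STAGE=/backup/staging               # scratch space for the ISO image

# keep the finished volume under a numbered name
mv "$ARCHIVE" "$ARCHIVE.$(($1-1))" || exit 1

# wrap the single file in an ISO9660 image and burn it
mkisofs -r -o "$STAGE/vol.iso" "$ARCHIVE.$(($1-1))" || exit 1
cdrecord -v speed=2 dev=0,0,0 -data "$STAGE/vol.iso" || exit 1
rm -f "$STAGE/vol.iso"

echo "Volume $(($1-1)) burned; insert a blank CD and press Enter to continue"
read dummy
exit 0
<---------------------------------------------------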
Thanks a lot for your answer!

A minor problem with this strategy, though, is that I do not have my CD burner working under Linux yet; in the meantime I'll be using Windows to burn them. So what I'd actually do is create the several dump files, move them to a Windows partition, reboot into Windows, and burn the CDs. It is a pain, but I don't think I can direct the output to my CD burner just yet.

Ramon
Hi,

On Thu, Dec 30, 1999 at 23:30 +0100, Ramon Diaz-Uriarte wrote:
A minor problem with this strategy, though, is that I do not have my CD burner working under Linux yet; in the meantime I'll be using Windows to burn them. So what I'd actually do is create the several dump files, move them to a Windows partition, reboot into Windows, and burn the CDs. It is a pain, but I don't think I can direct the output to my CD burner just yet.
Then use a control script for afio like this one:

----------------------------------->
#!/bin/sh
# $2 is the name of the archive file
# $1 is the volume number
mv $2 $2.$(($1-1))
<-----------------------------------

find / | afio -o -Z -G 9 -s 620m -H script test.afio.gz

will then create volumes with a size of 620 MB each. The volumes will be named test.afio.gz.1, test.afio.gz.2, ..., with the last volume keeping the plain name test.afio.gz (the script is only run between volumes).

Ciao,
Stefan
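To read those volumes back in later, the same idea could be turned around: a control script copies the numbered volume afio asks for back under the base name. This is only a guess at one way to do it, reusing the $1 = volume number, $2 = archive name convention from above and assuming the volumes were first copied off the CDs into /backup/volumes (with the last, unnumbered volume renamed to the next number in the sequence); test it on something unimportant first:

----------------------------------->
#!/bin/sh
# $1 is the volume afio asks for next, $2 the archive path it reads from
cp "/backup/volumes/test.afio.gz.$1" "$2" || exit 1
exit 0
<-----------------------------------

and the restore itself would then be started with something like

cp /backup/volumes/test.afio.gz.1 /tmp/test.afio.gz
afio -i -Z -s 0 -H restore-script /tmp/test.afio.gz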
Thank you very much for your answer.
dump -0u -B 600000 -f /the-second-hard-drive/file-name /dev/the-system
This is *by far* the most intelligent method of backing up a system I've seen in a long time. One could simply take this file, include an ElTorito image of LOAF or a similar single-floppy Linux and restore directly from the bootable CD.
Actually, I do not have my CD burner working under Linux yet; in the meantime I'll be using Windows to burn them. Thus, what I'd do is create the dump files, move them to a Windows partition, reboot into Windows, and burn the CDs (copying each file from dump onto its own CD). To restore, I'd mount the CD and type something like

restore -t -f /mnt/cdrom/my_dump
Unfortunately, Ramon, I only have more questions and no answers :(. I wonder about the limits of this method. I've never seen a CD-ROM filled `to the gills' with data, and therefore have nothing to study. What is the size limit on a standard 640MB CD-R, or a 700MB one for that matter? In other words, if you only have a single file, the allocation table, and another file (<= 1440k) on the disc, how big can that large file be? Simple algebra, I know, but *where* does one start to find information on the ISO9660 file system? According to the CD-R HOWTO, 620MB is an acceptable size, but those of you who know me know that I like to push the limits sometimes :].
I really can't answer any of the questions. I was trying to be safe, keeping the files < 600 MB, to make sure everything works OK.
Another question that I have is whether or not there is a way to compress the data that dump processes/generates. Using file limits in the dump command itself (above) is pointless if you'll simply compress the file afterwards.
I think an easy way to compress the dump is to use

dump -9uf - /dev/to_be_backed_up | gzip > /dev/the_device

so the dump is sent to stdout and piped through gzip before being written. However, given the cumbersome process I'll follow, I think it'll be safer not to compress anything.
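Since compressing in a pipe like that bypasses dump's own -B volume limit, one hedged way to still get CD-sized pieces is to cut the compressed stream with split. This is not from the thread, just a common combination; the 600m piece size and the file names are placeholders:

--------------------------------------------------->
# dump, compress, and cut into ~600 MB pieces named piece.aa, piece.ab, ...
dump -0uf - /dev/the-system | gzip | split -b 600m - /the-second-hard-drive/piece.

# to restore, concatenate the pieces (glob order is the right order)
# from inside the freshly prepared target file system
cat /the-second-hard-drive/piece.* | gunzip | restore -rf -
<---------------------------------------------------

Each piece could then be burned to its own CD and copied back to disk before the restore.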
I once used dd to read a partition, and then compressed the image with bzip2. It was a 2gb partition with ~600MB of data on it. The process from start to finish (dd; bzip) took around 2 hours. Using dd, the resulting image was exactly 2 gigs (I assumed that much), but I was shocked to see that the *compressed* file was well over 1 gig in size! I thought that surely bzip2 was more intelligent than that, but I was wrong. Using dump makes a lot more sense, to say the least.