Hello,

Perhaps a stupid question or two.

#1 - How does one back up an entire installation using dd?
#2 - And is it possible to pipe the dd output over an ethernet connection to another Linux box, so that the backup can be on a drive on another machine? Would NFS work with dd?

-Jim-
On Friday 12 September 2003 1:23 pm, Jim Norton wrote:
Hello,
Perhaps a stupid question or two.
#1 - How does one back up an entire installation using dd?
#2 - And is it possible to pipe the dd output over an ethernet connection to another Linux box, so that the backup can be on a drive on another machine? Would NFS work with dd?
Jim... How about something like this:

# dd if=/dev/hda1 of=/some/backup
# dd if=/dev/hda1 ... | ssh user@host dd of=backup

The powers and joys of dd are almost limitless. :-)

-Nick
* Jim Norton (jrn@oregonhanggliding.com) [030912 11:23]:
#1 - How does one back up an entire installation using dd?
#2 - And is it possible to pipe the dd output over an ethernet connection to another Linux box, so that the backup can be on a drive on another machine?
You could pipe the output to netcat, which transfers it to a netcat running on the remote machine. You could do something similar with ssh, but depending on how fast the processors are, you might be limiting the speed of the transfer. E.g.:

remote: netcat -l -p 10000 > backup-dd
local:  dd if=/dev/whatever | netcat remote 10000

You'll want to adjust the block size that dd reads.

--
-ckm
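The block-size point can be seen without a second machine: larger "bs=" values mean fewer read()/write() calls per byte moved, which matters even more once a socket is in the path. A minimal local stand-in for the pipeline above:

```shell
# Same shape as the dd | netcat pipeline, but piped into wc locally
# so it runs anywhere. 16 blocks of 64 KiB = 1 MiB total.
dd if=/dev/zero bs=64k count=16 2>/dev/null | wc -c   # prints 1048576
```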
On Fri, 12 Sep 2003, Jim Norton wrote:
Perhaps a stupid question or two.
(No, not stupid at all IMO.)
#1 - How does one back up an entire installation using dd?
I use, e.g.:

# cd /huge-filesystem/lots-of-space
# dd if=/dev/hda1 of=hda1.ext3.img bs=1024k

You can tune the "bs=" parameter for best performance if you desire. I name the backup file with the filesystem type and something to indicate that it is a filesystem image. I tend to do some of my backups nowadays as filesystem image files, since that lets me easily loopback mount them to retrieve a file, etc.

BTW, I do my regular backups (ext3, xfs) using the appropriate "dump" program (e.g., the ext[23] /sbin/dump program). The disadvantage of using "dd" for backups is that it copies the -entire- filesystem, including unallocated space, and so will always take up as much space on the backup media as the whole filesystem. Other backup programs, such as "dump", "tar", "cpio", etc., back up only existing files and ignore empty space in the filesystem.
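Both points above are easy to demonstrate with a throwaway image (the file names and mount point below are made up for illustration): dd writes every block, so the output file is exactly blocks times block size no matter what the source holds.

```shell
# Create a 4 MiB image from /dev/zero; dd copies raw bytes, so the
# file is the full 4 MiB even though it contains no real data.
dd if=/dev/zero of=demo.img bs=1024k count=4 2>/dev/null
stat -c %s demo.img        # prints 4194304
# A real filesystem image can then be loopback mounted read-only to
# pull out individual files (needs root), e.g.:
#   mount -o loop,ro filesystem.img /mnt/restore
rm -f demo.img
```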
#2 - And is it possible to pipe the dd output over an ethernet connection to another Linux box, so that the backup can be on a drive on another machine? Would NFS work with dd?
I would use something like:

# dd if=/dev/hda1 | ssh user@otherhost 'cd /huge-filesystem/lots-of-space && dd of=hda1.ext3.img'

Yes, NFS also works with "dd", i.e., the target filesystem may be NFS-mounted from another system.

Hope this helps!

Phil
--
Philip Amadeo Saeli
SuSE Linux 8.2
psaeli@zorodyne.com
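One practical note on sending raw images over the wire: since a dd image includes all the unallocated space, compressing the stream (e.g. with ssh -C, or by inserting gzip in the pipeline) can shrink the transfer dramatically when much of the filesystem is unused. A local sketch of the effect:

```shell
# 8 MiB of zeroed blocks (standing in for unallocated space) shrinks
# to a few KiB once gzipped.
dd if=/dev/zero bs=1024k count=8 2>/dev/null | gzip -c > demo.img.gz
stat -c %s demo.img.gz     # a few KiB, not 8388608
rm -f demo.img.gz
```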
Friday 12 Sep 2003 at 12:25pm, Philip Amadeo Saeli wrote:
On Fri, 12 Sep 2003, Jim Norton wrote:
Perhaps a stupid question or two.
(No, not stupid at all IMO.)
#1 - How does one back up an entire installation using dd?
I use, e.g.:
# cd /huge-filesystem/lots-of-space
# dd if=/dev/hda1 of=hda1.ext3.img bs=1024k
I have done this in the past, for example to "freeze" a filesystem image because I wanted to locate and restore a previously deleted file. I ran into the 2 GB (minus 1 byte) file size limit when I did this, so I was forced to create two files under 2 GB to transfer a ~4 GB partition. This was with SuSE 8.0 -- do 8.1 and/or 8.2 now have large file support for all utilities, so that this problem goes away?

Jim Cunning
* Jim Cunning (jcunning@cts.com) [030912 13:00]:
I ran into the 2G (-1byte) file size limit when I did this, so I was forced to create two files <2G to transfer a ~4G partition. This was with SuSE 8.0-- do 8.1 and/or 8.2 now have large file support for all utilities so this problem goes away?
SuSE has had LFS since 7.2. -- -ckm
Friday 12 Sep 03 at 1:25pm, Christopher Mahmood wrote:
* Jim Cunning (jcunning@cts.com) [030912 13:00]:
I ran into the 2G (-1byte) file size limit when I did this, so I was forced to create two files <2G to transfer a ~4G partition. This was with SuSE 8.0-- do 8.1 and/or 8.2 now have large file support for all utilities so this problem goes away?
SuSE has had LFS since 7.2.
That may be, but....

As I recall, there is a different API for opening, writing to, or seeking in (I don't remember which) files that are greater than 2 GB. Some utilities and other programs did not seem to use the API supporting large files, and failed when file sizes reached (2**31)-1. I have forgotten many of the specifics, but I _do_ remember that I had problems copying (with cp) a 3.1 GB file from an NT server using an SMBFS mount, though I was able to transfer the file with smbclient using 'get'. Using cp, the transfer always failed at 2,147,483,647 bytes. I also had the same problem with NFS transfers between SuSE 8.0 systems when the files were >2G.

I just ran an experiment with dd to verify that it can generate large (>2G) files _locally_ with

dd if=/dev/urandom of=bigfile bs=16k count=131073

and that cp can copy such a file locally. Is the problem only with NFS and smbfs?

Jim
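The byte count where cp stalled is no coincidence: it is exactly the largest file offset a signed 32-bit off_t can represent, which is what programs built without large file support are limited to.

```shell
# 2^31 - 1: the offset ceiling without large file support.
echo $(( (1 << 31) - 1 ))   # prints 2147483647
```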
* Jim Cunning (jcunning@cts.com) [030912 15:08]:
I just ran an experiment with dd to verify that it can generate large (>2G) files _locally_ with "dd if=/dev/urandom of=bigfile bs=16k count=131073" and that cp can copy such a file locally. Is the problem only with NFS and smbfs?
NFS v2 only supports files < 2G; that would be the problem. v3 supports files up to 8 EiB. I have no idea about smbfs.

--
-ckm
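For reference, the v3 ceiling comes from 64-bit signed file offsets, and a client can be pinned to v3 with a mount option (option spelling varies slightly across kernel versions, so treat the mount line as a sketch):

```shell
# 2^63 - 1 bytes is ~8 EiB, the 64-bit signed-offset ceiling:
echo $(( (1 << 63) - 1 ))   # prints 9223372036854775807
# To make sure a client negotiates NFS v3 rather than v2 (needs root):
#   mount -t nfs -o vers=3 otherhost:/export /mnt
```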
participants (5)

- Christopher Mahmood
- Jim Cunning
- jrn@oregonhanggliding.com
- Nick LeRoy
- Philip Amadeo Saeli