Hello,

I am quite new to doing backups on Linux, so I am just starting to discover the power of tar.

At the moment I back up into a tar file (tar -c -gzip /serv1/ > /backup/serv1.tar), but the created backup file exceeds 4 GB, so tarring hangs ... (SuSE 7.2, kernel 2.4.4-4GB).

What are the solutions? I found the option --multi-volume in the tar documentation ("Informs tar that it should create or otherwise operate on a multi-volume tar archive"), but it says nothing to me...

I would appreciate any advice :)

Kestutis
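For reference, GNU tar's --multi-volume mode mentioned above is normally paired with --tape-length, and it cannot be combined with compression; piping tar through split is a common alternative. A minimal sketch, with volume sizes and output paths that are illustrative rather than taken from the thread:

  # Multi-volume archive: -M enables multi-volume mode, -L sets the volume
  # size in units of 1024 bytes (here about 1 GB per volume); tar moves on
  # to the next -f file when a volume fills up. Note that -M cannot be
  # combined with -z (gzip) compression.
  tar -cv -M -L 1048576 -f /backup/serv1.vol1.tar -f /backup/serv1.vol2.tar /serv1/

  # Alternative: compress as usual but split the stream into 1 GB pieces
  # (/backup/serv1.tgz.aa, serv1.tgz.ab, ...), each safely under the limit.
  tar -cz /serv1/ | split -b 1024m - /backup/serv1.tgz.

  # Restore by concatenating the pieces back into a single tar stream:
  cat /backup/serv1.tgz.* | tar -xz -C /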
Regarding tar, afio, and Linux backups

Hello Kestutis, all--

(I am a Linux newbie and I hope that more experienced users on the SuSE Linux list will correct any misconceptions or mistakes I may make below.)

---

I have also just begun to learn about and use tar and afio, and one of the first things I learned is that if one stores an archive as a file in a Linux filesystem (ext2, ext3 or reiserfs, for example), those filesystems have a limit on how large a file can be. On my system, tar and afio failed at the point where the archive exceeded 2 gigabytes, because 2 GB was the largest file size allowed (by the kernel or the filesystem or both, I presume).

The solution was to use tar or afio in a different way. Instead of creating an archive as a file within a filesystem, I used tar or afio to write the archive directly onto a separate storage medium (not my primary hard drive, but in my case a second hard drive that I have reserved just for backups with tar or afio).

A hard disk, floppy disk, zip disk, etc. don't HAVE to have a filesystem on them for tar or afio to write an archive to them. While tar and afio can write an archive to a file within a filesystem, they can also write directly to a storage device, treating the entire storage device as if it were a file. If one uses a storage device this way, that device cannot subsequently be mounted at a mount point on one's root filesystem, because it does not at that point have a filesystem (such as ext2, ext3 or reiserfs) on it, but it CAN be read directly by tar or afio: the archive(s) can be extracted from the device and the unpacked files placed somewhere within one's mounted filesystems.

To give you an example, I have two hard drives. Their device files are /dev/hda (my primary hard drive, 40 GB, with a reiser filesystem created on it and SuSE Linux installed on it) and /dev/hdb, an 8 GB drive that is NOT partitioned and does NOT contain a filesystem. BUT it CAN be accessed by the Linux kernel and by tar and afio, and it can be used to store a tar or other archive up to the full size of the device (it's an 8 GB hard disk).

So, to back up my home directory (for example), I change to my home directory (cd /home/my_user_name), and instead of doing the following:

  tar -cvf /backup/myarchive.tar .

(the "dot" at the end stands for and means "this directory, and everything in it"), I do this instead:

  tar -cvf /dev/hdb .

Notice that instead of a filename, I simply used the device name of my second hard drive, /dev/hdb. Remember that device files are really just special files themselves; when the kernel writes to one of them (like /dev/hdb, for instance), it is actually writing directly to a hardware device instead of to a file in a filesystem--it is treating that hardware device as though it were itself a file.

In the first case, I told tar to CREATE (-c) an archive, be VERBOSE (-v) about it, write the archive to a FILE (-f /backup/myarchive.tar), and use the current directory (.) as the source of files and directories to archive. In the second case, I told tar to CREATE an archive, be VERBOSE in doing so, write the archive directly to the "file" /dev/hdb (which is my second IDE hard drive), and use the current directory, with all its files, directories, and files in those directories, as the source of data to back up.
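To round out Steve's example, the matching restore reads the archive straight back off the raw device; a minimal sketch, not part of the original message (the alternate target directory is illustrative):

  # Unpack the archive directly from the raw device into the directory
  # it was created from:
  cd /home/my_user_name
  tar -xvf /dev/hdb

  # Or unpack somewhere else with GNU tar's -C option (the directory
  # must already exist):
  tar -xvf /dev/hdb -C /tmp/restore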
In this way, my archive can be as large as necessary, up to the full capacity of the disk, without running into file-size limits that may be imposed by any particular filesystem.

When using tar (or afio) to create an archive directly on a storage device, rather than as a file within a filesystem on that device, the device (or the removable medium in the drive) cannot be mounted at a mount point on your primary filesystem, because it no longer contains a filesystem itself. Instead, the kernel and tar and afio treat the entire hard drive or zip disk or other storage medium as a single file, accessed by its "file name," which is its entry in the /dev directory, such as /dev/hdb. Therefore, files cannot be moved and copied to the storage device in the same way that one can move and copy files within and between filesystems. In fact, since one cannot mount a device used this way, one cannot see what data it contains except by using, for example, tar or afio to list its contents, in my case like this:

  tar -tvf /dev/hdb

Beware that writing to a hardware device directly in this way overwrites any data that may have been contained on it--it nullifies any filesystem that may have previously been created on the device and destroys the data there. So use with care.

I hope this helps. As I said at the top of this message, I hope that other more experienced Linux users on this list will add detail to and correct what I have written above.

Best wishes,
Steve

--- --- ---
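Given the overwrite danger just described, it can be worth checking what a raw device currently holds before writing to it; a small sketch, not from the thread, using Steve's device name:

  # file -s examines the contents of a block device rather than just its
  # type, and will report e.g. an ext2 filesystem or a POSIX tar archive:
  file -s /dev/hdb

  # Or inspect the first sector by hand:
  dd if=/dev/hdb bs=512 count=1 2>/dev/null | od -c | head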
On Tuesday 28 May 2002 04:11, Kestutis Saldziunas wrote:
> Hello,
> I am quite new to doing backups on Linux, so I am just starting to
> discover the power of tar.
> At the moment I back up into a tar file (tar -c -gzip /serv1/ >
> /backup/serv1.tar), but the created backup file exceeds 4 GB, so
> tarring hangs ... (SuSE 7.2, kernel 2.4.4-4GB).
> What are the solutions? I found the option --multi-volume in the tar
> documentation ("Informs tar that it should create or otherwise operate
> on a multi-volume tar archive"), but it says nothing to me...
> I would appreciate any advice :)
> Kestutis
Steve D wrote:
> Regarding tar, afio, and Linux backups
> Hello Kestutis, all--
> (I am a Linux newbie and I hope that more experienced users on the
> SuSE Linux list will correct any misconceptions or mistakes I may
> make below.)
> ---
> I have also just begun to learn about and use tar and afio, and one
> of the first things I learned is that if one stores an archive as a
> file in a Linux filesystem (ext2, ext3 or reiserfs, for example),
> those filesystems have a limit on how large a file can be. On my
> system, tar and afio failed at the point where the archive exceeded
> 2 gigabytes, because 2 GB was the largest file size allowed (by the
> kernel or the filesystem or both, I presume).
You must be using an old kernel and an old tar linked against an old glibc. The 2 GB barrier was broken many months ago.

Mark
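A quick way to tell whether a given system still has the 2 GB limit Mark refers to is to check the tool versions and then try creating a sparse file just past the boundary; a minimal sketch, with an illustrative test path:

  # Versions of the pieces involved:
  tar --version
  rpm -q glibc

  # Try to create a sparse file a little over 2 GB; on a system without
  # large-file support this fails with "File size limit exceeded":
  dd if=/dev/zero of=/backup/lfs-test bs=1024k count=1 seek=2048
  ls -l /backup/lfs-test
  rm /backup/lfs-test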