On 03.02.11 at 15:51, Greg Freemyer wrote:
> Is there a loopback driver that can support large compressed filesystems? (i.e. a 200 GB IDE disk.)
I create compressed CDs that can be decompressed and read on the fly by the kernel. The "mkzftree" utility creates the compressed tree. From that tree an image can be created (mkisofs) and burnt to CD; but instead, you can also mount the image directly through the loop device. I don't know the maximum size it supports.
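Something like this is the usual sequence, if I remember right (the paths are invented, and the mount step needs a kernel with zisofs support):

    # Compress the tree into zisofs format (mkzftree comes with
    # the zisofs-tools package)
    mkzftree /data/original /data/compressed

    # Build the ISO image: -z writes the RRIP records needed for
    # transparent decompression, -R enables Rock Ridge
    mkisofs -z -R -o /tmp/backup.iso /data/compressed

    # Mount the image through the loop device to test it before burning
    mount -o loop -t iso9660 /tmp/backup.iso /mnt/test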
It is not what you are looking for, but you could investigate it. :-?

Also, the ext2 file system may support compressed files and directories; according to "man chattr":

|> A file with the `c' attribute set is automatically compressed on
|> the disk by the kernel. A read from this file returns uncompressed
|> data. A write to this file compresses data before storing them on the
|> disk. [...]
|> As of Linux 2.2, the `c', `s', and `u' attributes are not honored
|> by the kernel filesystem code. These attributes will be implemented
|> in a future ext2 fs version.

I don't know what the current status of this is, but the page does mention experimental patches. I would like to know more...
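If the attribute ever becomes functional, or with one of those experimental patches applied, I suppose using it would be as simple as this (the path is invented):

    # Mark a file for transparent compression (needs kernel support,
    # which stock ext2 does not have yet)
    chattr +c /backup/bigfile

    # Check which attributes are actually set
    lsattr /backup/bigfile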
> Currently, I create .tgz files with thousands of files internally on the backup drive, but when I want to do a restore, it can take a very long time (i.e. hours) to extract a single file.
There was a bash script on older SuSE distros named kbackup. It did a backup using cpio, compressing each file individually instead of first concatenating everything and then compressing it as a ".tgz". Since each file can then be extracted and decompressed on its own, I think access time would be much faster.
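I don't have the script any more, but the idea could be sketched like this (the paths are invented, and the gzip step modifies the tree in place, so run it on a copy):

    cd /copy/of/data
    # Compress every file individually; gzip renames each one to *.gz
    find . -type f -exec gzip -9 {} \;

    # Pack the tree with cpio; every member stays independently compressed
    find . -depth -print | cpio -o -H newc > /backup/data.cpio

    # Restoring one file costs a single gunzip, not hours of
    # decompressing a whole .tgz
    cpio -i -d './some/file.txt.gz' < /backup/data.cpio
    gunzip ./some/file.txt.gz

--
Cheers,
Carlos Robinson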