[opensuse] Reliable way to backup hard drive before clean install
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone. I don't normally do complete backups because of limited space on the backup drive, but I have room for a single full backup of both partitions. What is a reliable way? "cp -a / /media/LinuxBackup/full" is what occurs to me. Is there a better way? TIA, Jeffrey
On 03/10/2016 at 18:33, Jeffrey L. Taylor wrote:
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone. I'm not doing complete backups because of limited space on the backup drive. I've room for a single full backup, both partitions. What is a reliable way?
"cp -a / /media/LinuxBackup/full" is what occurs to me. Is there a better way?
TIA, Jeffrey
It's not an easy thing. For data, the problem is knowing where the data is located. It's not all in /home: some is in /var (databases, mail...), some in /etc (config). Don't forget the "dot" (.) hidden files either. Cache files may be big and are unnecessary. ISO (DVD) files are also big and may not be useful in a backup. Temporary files too. For the system, the problem is open files and virtual files. An open (constantly modified) file can't be copied, or not in a usable form. Virtual files (/proc, /sys...) don't need to be copied, and I'm sure I'm forgetting things :-( Some notes: http://dodin.info/wiki/pmwiki.php?n=Doc.CompleteBackup If possible it is better to back up a stopped machine (that is, from a live disk or another Linux on the same machine), but anyway, any backup is better than nothing :-) jdd
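For what it's worth, a rough sketch of that kind of selective copy using rsync instead of cp, skipping the virtual and cache trees mentioned above; the exclude list is incomplete and the destination is the original poster's /media/LinuxBackup, so treat it as an illustration only:

  rsync -aAXH \
      --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*","/home/*/.cache/*"} \
      / /media/LinuxBackup/full/

The brace expansion needs bash; with a plain POSIX shell, spell out one --exclude per pattern.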
Jeffrey L. Taylor wrote:
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone. I'm not doing complete backups because of limited space on the backup drive. I've room for a single full backup, both partitions. What is a reliable way?
"cp -a / /media/LinuxBackup/full" is what occurs to me. Is there a better way?
tar czvf /media/LinuxBackup/full.tar.gz <all but mount points> Or use rsync. -- Per Jessen, Zürich (15.5°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
Per Jessen wrote:
Jeffrey L. Taylor wrote:
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone. I'm not doing complete backups because of limited space on the backup drive. I've room for a single full backup, both partitions. What is a reliable way?
"cp -a / /media/LinuxBackup/full" is what occurs to me. Is there a better way?
tar czvf /media/LinuxBackup/full.tar.gz <all but mount points>
42.2 is looking very solid at the moment. If I were you, I would back up /home, /var, /etc and /usr/local, nuke the rest, then reinstall. -- Per Jessen, Zürich (15.2°C) http://www.hostsuisse.com/ - virtual servers, made in Switzerland.
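A minimal sketch of that suggestion, reusing Per's earlier archive name and the original poster's backup mount; the directory list is exactly the one above:

  tar czvf /media/LinuxBackup/full.tar.gz /home /var /etc /usr/local
  tar tzf /media/LinuxBackup/full.tar.gz > /dev/null   # walk the whole archive once to confirm it is readable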
On 10/03/2016 09:33 AM, Jeffrey L. Taylor wrote:
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone. I'm not doing complete backups because of limited space on the backup drive. I've room for a single full backup, both partitions. What is a reliable way?
"cp -a / /media/LinuxBackup/full" is what occurs to me. Is there a better way?
TIA, Jeffrey
That's a big jump, so you are wise to choose a clean install. Further, you MAY have to repartition if you had too small a root partition in 13.1. But assuming no repartitioning problems, I would simply back up the /home partition and selected configurations from the /etc directory if you run your own mail server, web server, media server, and things like CUPS etc. Maybe copy all of /etc to some removable media or something. If the budget will stand it, a spare/new hard drive in a USB disk caddy is nice. It will serve well for backups in the future as well. As for what tools to use? That could cause a huge chain of competing recommendations. I'd recommend whatever you are familiar with. I still use a package from a long time ago that allows me to stack compressed backups onto a NAS or a tape or an external hard drive. (It's non-free so I won't mention it here; there are free equivalents.) The really good news is that if you do NOT have to repartition, and /home is already a separate partition, your prior planning for this eventuality is likely to make the whole operation quite painless. Be careful during the install not to let it format /home, but to nuke and reformat the other partitions. When adding users to the system, add them in the same order so that their user id numbers match what is already in the /home directories. Get it up and running before you copy over any config files from /etc. Then do one such copy-over at a time. -- After all is said and done, more is said than done.
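One way to get that UID matching right, sketched with made-up names and numbers: check the numeric owner of the preserved home directory, then (more directly than relying on creation order) pass the old UID explicitly when recreating the user:

  ls -ln /home                     # shows the numeric UID/GID that owns each home directory
  useradd -u 1000 -m jeffrey       # "jeffrey" and 1000 are placeholders for the real name and old UID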
On 03/10/2016 at 19:48, John Andersen wrote:
On 10/03/2016 09:33 AM, Jeffrey L. Taylor wrote:
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone.
I did not notice the jump at first read. It doesn't change the work much, but I would back up all the "." (dot) files in /home/user, and *remove them* before installing; the old config may not be good for the new system. jdd
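A small sketch of that, assuming the user's home is /home/user and the backup drive is the original poster's /media/LinuxBackup; the dot files are archived and then moved into a holding directory (rather than deleted outright) so they can be restored selectively later:

  cd /home/user
  find . -mindepth 1 -maxdepth 1 -name '.*' -print0 | \
      tar czf /media/LinuxBackup/user-dotfiles.tar.gz --null -T -
  mkdir old-dotfiles
  find . -mindepth 1 -maxdepth 1 -name '.*' -exec mv -t old-dotfiles {} +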
On 2016-10-03 18:33, Jeffrey L. Taylor wrote:
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone. I'm not doing complete backups because of limited space on the backup drive. I've room for a single full backup, both partitions. What is a reliable way?
"cp -a / /media/LinuxBackup/full" is what occurs to me. Is there a better way?
Two main methods: 1) an image (full disk) backup, good for a fast restore from scratch. I would perhaps use Clonezilla for this. 2) a file-by-file backup. I would use rsync for this. Or, if the device is XFS, there are specific XFS backup/restore tools. I prefer to use 1 and 2 for the system, and 2 for data. Yes, both. The paranoid in me wins. I would not use tar.gz archives. I do not trust them. Reason: a few (or one) byte error makes the tar unrecoverable. I would prefer a method like rar, which stores recovery data that allows the archive to be recovered in case of media errors - but rar does not support all Linux filesystem attributes, so it is out. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
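Not Clonezilla itself (which is interactive), but a bare-bones sketch of what method 1 amounts to: a raw image of the root partition onto the backup drive. The device name /dev/sda2 is a guess, and the source should be unmounted or mounted read-only while the image is taken:

  dd if=/dev/sda2 of=/media/LinuxBackup/root.img bs=4M conv=fsync
  cmp /dev/sda2 /media/LinuxBackup/root.img    # read both back and compare, as a basic verification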
On 10/03/2016 11:59 AM, Carlos E. R. wrote:
Reason: a few (or one) byte error makes the tar unrecoverable.
In reality, Carlos, exactly how many of these have failed for you? I'm probably a newbie here, since I've only been running some form of Suse since Suse 5.x back in the day when you could send email directly to Hubert Mantel and get a polite answer back. In those years, I've had a grand total of ZERO tar.gz archives fail. -- After all is said and done, more is said than done.
On 03/10/2016 at 21:10, John Andersen wrote:
In those years, I've had a grand total of ZERO tar.gz archives fail.
I have had some fail, mostly due to file transfer or hardware problems, and when that happened no file in the archive could be read. jdd
On 2016-10-03 22:28, jdd wrote:
On 03/10/2016 at 21:10, John Andersen wrote:
In those years, I've had a grand total of ZERO tar.gz archives fail.
I had some, mostly due to file transfer or hardware problems, but no file in it could be read
Exactly. That's the problem. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
jdd wrote:
On 03/10/2016 at 21:10, John Andersen wrote:
In those years, I've had a grand total of ZERO tar.gz archives fail.
I had some, mostly due to file transfer or hardware problems, but no file in it could be read
Second that. Had the same problem after an apparently failed SFTP transfer. But there was no error message; the copy seemed valid. If I hadn't packed the files, probably only individual files would have been affected. - Chris
On 2016-10-16 22:58, Chris wrote:
jdd wrote:
On 03/10/2016 at 21:10, John Andersen wrote:
In those years, I've had a grand total of ZERO tar.gz archives fail.
I had some, mostly due to file transfer or hardware problems, but no file in it could be read
Second that. Had the same problem after an apparently failed SFTP transfer. But there was no error message. The copy seemed valid. If I hadn't packed the files, only single files probably were affected.
To avoid the problem you have to compress the files individually, so that an error in one doesn't affect the rest (first compress, then tar). Or use a compressor that includes error recovery capabilities, such as rar. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
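A hedged sketch of that "first compress, then tar" idea, working on a scratch copy on the backup drive since gzip -r replaces each file with file.gz in place; all the paths are placeholders:

  cp -a /home /media/LinuxBackup/staging
  gzip -r /media/LinuxBackup/staging                 # each file gets its own gzip stream and CRC
  tar cf /media/LinuxBackup/home-filewise.tar -C /media/LinuxBackup staging
  # plain tar, no -z: a corrupt byte now costs one member, not the whole archive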
Chris -- ...and then Chris said... % % Second that. Had the same problem after an apparently failed SFTP % transfer. But there was no error message. The copy seemed valid. If I % hadn't packed the files, only single files probably were affected. Just curious... Did you run a checksum on both ends after the copy? In my experience, I can write things perfectly happily but then trip over some write or read error that shows up in a checksum test. Even for local copies, if it's important I always diff the src and dst to ensure that I can read what I wrote before wiping the original I read. HAND :-D -- David T-G See http://justpickone.org/davidtg/email/ See http://justpickone.org/davidtg/tofu.txt
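For completeness, what that end-to-end check can look like; the archive name is Per's earlier suggestion and the remote mount point is made up:

  sha256sum /media/LinuxBackup/full.tar.gz      # run on the source side
  sha256sum /mnt/remote/full.tar.gz             # run on the destination side, then compare the two hashes
  diff -r /source/dir /media/LinuxBackup/full/  # or, for an uncompressed tree copy, compare recursively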
On 2016-10-17 11:53, David T-G wrote:
Chris --
...and then Chris said... % % Second that. Had the same problem after an apparently failed SFTP % transfer. But there was no error message. The copy seemed valid. If I % hadn't packed the files, only single files probably were affected.
Just curious... Did you run a checksum on both ends after the copy? In my experience, I can write things perfectly happily but then trip over some write or read error that shows up in a checksum test. Even for local copies, if it's important I always diff the src and dst to ensure that I can read what I wrote before wiping the original I read.
If the source is static and doesn't change, yes, that's a possibility. But I often do backups while the system is live. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Carlos, et al -- ...and then Carlos E. R. said... % % On 2016-10-17 11:53, David T-G wrote: % > ... % > Just curious... Did you run a checksum on both ends after the copy? ... % % If the source is static and doesn't change, yes, that's a possibility. % % But I often do backups while the system is live. Well, yeah, that could make things more challenging. Of course, then you don't really know if you got a good copy at all, right? So snapshotting a mirror or breaking and then re-syncing would probably be a good approach there. All of this presumes, of course, that one is worried about integrity, which should be pretty simple in this day and age, but I have to admit that I spend more time than I should arguing with my spinning media :-( HAND :-D -- David T-G See http://justpickone.org/davidtg/email/ See http://justpickone.org/davidtg/tofu.txt
On 2016-10-17 13:57, David T-G wrote:
Carlos, et al --
...and then Carlos E. R. said... % % On 2016-10-17 11:53, David T-G wrote: % > ... % > Just curious... Did you run a checksum on both ends after the copy? ... % % If the source is static and doesn't change, yes, that's a possibility. % % But I often do backups while the system is live.
Well, yeah, that could make things more challenging. Of course, then you don't really know if you got a good copy at all, right? So snapshotting a mirror or breaking and then re-syncing would probably be a good approach there.
All of this presumes, of course, that one is worried about integrity, which should be pretty simple in this day and age, but I have to admit that I spend more time than I should arguing with my spinning media :-(
Sometimes I do a backup of the offline system, but that means that I have to boot from something else, then do the backup. This is an operation that takes hours. After that, I do more frequent backups (incremental, with rsync), while online. Yes, of course, some files will be in transient state, but all work files will be in their saved states. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 2016-10-17 15:27, jdd wrote:
On 17/10/2016 at 12:44, Carlos E. R. wrote:
But I often do backups while the system is live.
but then changing files are probably not usable anyway
Most of them are temporary files used by the desktop and not that important. All data files are copied as the saved files that exist on disk. If you worry about a file that is changing at the same time as the backup is done, then do it twice. In MS-DOS you can try to acquire a lock on the file before backing it up. If the file is in use, it will fail, so the backup either waits or continues with another file and tries the locked file later. At the end, it can produce a list of files it failed to lock. I don't know if that is possible in Linux. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 17/10/2016 at 16:42, Carlos E. R. wrote:
On 2016-10-17 15:27, jdd wrote:
but then changing files are probably not usable anyway
Most of them are temporary files used by the desktop and not that important. All data files are copied as the saved files that exist on disk. If you worry about a file that is changing at the same time as the backup is done, then do it twice.
The most important case is databases. One should dump the database (as text) before any rsync, but who does? My own database is written pretty rarely (and only by me, almost never at the time the cron-driven backup runs :-), so it's not a problem for me, but otherwise the database can't be recovered. jdd
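A hedged example of that "dump it as text first" step; MySQL/MariaDB and the output path are assumptions, since the kind of database isn't named above:

  mysqldump --all-databases --single-transaction > /media/LinuxBackup/db-dump.sql
  # the PostgreSQL equivalent would be: pg_dumpall > /media/LinuxBackup/db-dump.sql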
On Mon, Oct 17, 2016 at 10:42 AM, Carlos E. R. <robin.listas@telefonica.net> wrote:
On 2016-10-17 15:27, jdd wrote:
On 17/10/2016 at 12:44, Carlos E. R. wrote:
But I often do backups while the system is live.
but then changing files are probably not usable anyway
Most of them are temporary files used by the desktop and not that important. All data files are copied as the saved files that exist on disk. If you worry about a file that is changing at the same time as the backup is done, then do it twice.
In MsDOS you can try to acquire a lock on the file before backing it up. If the file is in use, it will fail, so the backup either waits or continues with another file, and try the locked file later. At the end, it can produce a list of failed-locked files.
I don't know if that is possible in Linux.
To my knowledge all file level locking is voluntary, i.e. user space apps have to intentionally respect other programs locking a file. So programs that are designed to work together can leverage file locking, but a backup program can't implement it without the cooperation of every other program on the system. Greg
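For illustration, what that voluntary (advisory) locking looks like with flock(1); the data file and lock file names are made up, and it only helps if the writing application takes the same lock:

  (
    flock -n 9 || { echo "file is busy, will retry later"; exit 1; }
    cp -a /var/lib/mydata.db /media/LinuxBackup/mydata.db
  ) 9> /var/lock/mydata.lock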
On 2016-10-17 17:50, Greg Freemyer wrote:
On Mon, Oct 17, 2016 at 10:42 AM, Carlos E. R. <robin.listas@telefonica.net> wrote:
In MsDOS you can try to acquire a lock on the file before backing it up. If the file is in use, it will fail, so the backup either waits or continues with another file, and try the locked file later. At the end, it can produce a list of failed-locked files.
I don't know if that is possible in Linux.
To my knowledge all file level locking is voluntary. ie. user space apps have to intentionally respect other programs locking a file.
No, there are also kernel locks. I read something about mandatory locking, but I don't know how good it is. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 18.10.2016 00:21, Carlos E. R. wrote:
On 2016-10-17 17:50, Greg Freemyer wrote:
On Mon, Oct 17, 2016 at 10:42 AM, Carlos E. R. <robin.listas@telefonica.net> wrote:
In MsDOS you can try to acquire a lock on the file before backing it up. If the file is in use, it will fail, so the backup either waits or continues with another file, and try the locked file later. At the end, it can produce a list of failed-locked files.
I don't know if that is possible in Linux.
To my knowledge all file level locking is voluntary. ie. user space apps have to intentionally respect other programs locking a file.
No, there are also kernel locks.
Advisory locks are also kernel locks.
I read something about mandatory locking, but I don't know how good it is.
A file needs to be explicitly marked to allow mandatory locking, so it won't work in the general case. See https://www.kernel.org/doc/Documentation/filesystems/mandatory-locking.txt.
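Roughly what "explicitly marked" means, per that kernel document: the filesystem has to be mounted with the mand option, and the file given the setgid bit with group execute cleared. Purely illustrative, with made-up paths:

  mount -o remount,mand /home
  chmod g+s,g-x /home/user/important.db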
On 2016-10-18 05:33, Andrei Borzenkov wrote:
On 18.10.2016 00:21, Carlos E. R. wrote:
On 2016-10-17 17:50, Greg Freemyer wrote:
On Mon, Oct 17, 2016 at 10:42 AM, Carlos E. R. <> wrote:
To my knowledge all file level locking is voluntary. ie. user space apps have to intentionally respect other programs locking a file.
No, there are also kernel locks.
Advisory locks are also kernel locks.
I read something about mandatory locking, but I don't know how good it is.
File needs to be explicitly marked for allowing mandatory locking so it won't work in general case. See https://www.kernel.org/doc/Documentation/filesystems/mandatory-locking.txt.
Oh. Pity. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On Mon, Oct 17, 2016 at 9:27 AM, jdd <jdd@dodin.org> wrote:
On 17/10/2016 at 12:44, Carlos E. R. wrote:
But I often do backups while the system is live.
but then changing files are probably not usable anyway
I didn't pay much attention to this thread, but LVM snapshots are a great way to handle changing files. You make a snapshot, then mount the snapshot read-only and make your backup from there. Greg
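A minimal sketch of that, assuming a volume group called "system" with a logical volume "home"; the snapshot size, mount point and destination are placeholders:

  lvcreate --snapshot --size 2G --name home-snap /dev/system/home
  mount -o ro /dev/system/home-snap /mnt/snap      # for XFS, add nouuid
  rsync -a /mnt/snap/ /media/LinuxBackup/home/
  umount /mnt/snap
  lvremove -f /dev/system/home-snap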
On 2016-10-17 23:25, Greg Freemyer wrote:
On Mon, Oct 17, 2016 at 9:27 AM, jdd <jdd@dodin.org> wrote:
On 17/10/2016 at 12:44, Carlos E. R. wrote:
But I often do backups while the system is live.
but then changing files are probably not usable anyway
I didn't pay much attention to this thread, but LVM snapshots are a great way to handle changing files.
You make a snapshot, then mount the snapshot read-only and make your backup from there.
XFS volumes also have a way to do it. I have not used the method myself, but I read about it on some man page. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
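Presumably the man page in question is xfsdump(8); a hedged sketch of a full (level 0) dump of /home to the original poster's backup drive, and its restore (the session and media labels are arbitrary):

  xfsdump -l 0 -L pre-reinstall -M backupdisk -f /media/LinuxBackup/home.xfsdump /home
  xfsrestore -f /media/LinuxBackup/home.xfsdump /home     # to put it back later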
John Andersen wrote:
On 10/03/2016 11:59 AM, Carlos E. R. wrote:
Reason: a few (or one) byte error makes the tar unrecoverable.
In reality, Carlos, exactly how many of these have failed for you?
Yes, I have to wonder that too.
I'm probably a newbie here, since I've only been running some form of Suse since Suse 5.x back in the day when you could send email directly to Hubert Mantel and get a polite answer back.
In those years, I've had a grand total of ZERO tar.gz archives fail.
Ditto. -- Per Jessen, Zürich (8.8°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 2016-10-04 08:14, Per Jessen wrote:
John Andersen wrote:
On 10/03/2016 11:59 AM, Carlos E. R. wrote:
Reason: a few (or one) byte error makes the tar unrecoverable.
In reality, Carlos, exactly how many of these have failed for you?
Yes, I have to wonder that too.
It is rare, but it is documented. Media failures are possible, and if one happens in the middle of a tar.gz you lose it completely.
In those years, I've had a grand total of ZERO tar.gz archives fail.
Ditto.
How many in floppies? -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Carlos E. R. wrote:
On 2016-10-04 08:14, Per Jessen wrote:
John Andersen wrote:
On 10/03/2016 11:59 AM, Carlos E. R. wrote:
Reason: a few (or one) byte error makes the tar unrecoverable.
In reality, Carlos, exactly how many of these have failed for you?
Yes, I have to wonder that too.
It is rare, but it is documented. Media failures are possible, and if one happens in the middle of a tar.gz you lose it completely.
Sure, but every archiving tool is vulnerable to media and network failures, tar no more so than any other.
In those years, I've had a grand total of ZERO tar.gz archives fail.
Ditto.
How many in floppies?
Floppies?? -- Per Jessen, Zürich (12.6°C) http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.
On 2016-10-04 11:47, Per Jessen wrote:
Carlos E. R. wrote:
On 2016-10-04 08:14, Per Jessen wrote:
John Andersen wrote:
On 10/03/2016 11:59 AM, Carlos E. R. wrote:
Reason: a few (or one) byte error makes the tar unrecoverable.
In reality, Carlos, exactly how many of these have failed for you?
Yes, I have to wonder that too.
It is rare, but it is documented. Media failures are possible, and if one happens in the middle of a tar.gz you lose it completely.
Sure, but every archiving tool is vulnerable to media and network failures, tar no more so than any other.
Not tar. The problem is gz.
In those years, I've had a grand total of ZERO tar.gz archives fail.
Ditto.
How many in floppies?
Floppies??
LOL. Yes, the worst media I can think of, where I stored archives some time ago, and where media errors in later years were frequent. Hey, this desktop machine has a floppy drive. I haven't managed to make it work since I built the machine; I don't know if it is a hardware problem or a kernel problem. A single error, and the whole archive is lost. On the other hand, I still have backups made on a hundred floppies in working order. Made with PCTools backup in the 80's. One has one error, but the restore program successfully regenerates the data with forward error recovery methods. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Carlos E. R. wrote:
On 2016-10-04 11:47, Per Jessen wrote:
Carlos E. R. wrote:
On 2016-10-04 08:14, Per Jessen wrote:
John Andersen wrote:
On 10/03/2016 11:59 AM, Carlos E. R. wrote:
Reason: a few (or one) byte error makes the tar unrecoverable.
In reality, Carlos, exactly how many of these have failed for you?
Yes, I have to wonder that too.
It is rare, but it is documented. Media failures are possible, and if one happens in the middle of a tar.gz you lose it completely.
Sure, but every archiving tool is vulnerable to media and network failures, tar no more so than any other.
Not tar. The problem is gz.
Oh? Well, then use another compressor. xz is good.
In those years, I've had a grand total of ZERO tar.gz archives fail.
Ditto.
How many in floppies?
Floppies??
LOL. Yes, the worst media I can think of where I stored archives some time ago, and where media error in later years were frequent.
Yes, that is true. Well, floppies were never meant for long-term storage, although I'm sure I still have some 5" up in the attic. -- Per Jessen, Zürich (14.3°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 2016-10-04 14:03, Per Jessen wrote:
Carlos E. R. wrote:
Not tar. The problem is gz.
Oh? Well, then use another compressor. xz is good.
I made a review of several compressors, none is good enough. Almost none have error recovery.
Yes, that is true. Well, floppies were never meant for long-term storage, although I'm sure I still have some 5" up in the attic.
The original floppies sold in the 80's do last. Those made by the end of the 90's and later are terrible. Impossible to get one that lasts a month without an error. A box of ten has some that error out on format. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 04/10/2016 at 14:39, Carlos E. R. wrote:
I made a review of several compressors, none is good enough. Almost none have error recovery.
They should have it: http://www.7-zip.org/recover.html At least the files outside the damaged part should be recoverable.
Yes, that is true. Well, floppies were never meant for long-term storage, although I'm sure I still have some 5" up in the attic.
The original floppies sold in the 80's do last. Those made by the end of the 90's and later are terrible. Impossible to get one that lasts a month without an error. A box of ten has some that error out on format.
Anyway, most people do not have a floppy drive, or an interface to make one work. One can still buy a USB 3.5" floppy drive, but not 5" or 8". So, yes, when trying to make long-term archives (one can't really speak of backups any more) you have to think about the medium. For example, flash disks are known to have a short life (including SSDs) - I mean lifetime, not number of write cycles. Hard drives have a good life expectancy but can die unexpectedly. It used to be said that archive media have to be refreshed at a bare minimum every 5 years. That is pretty easy, as in that time prices drop and sizes grow. One can find DVDs or BDs said to be good for 1000 years - make an appointment in the year 3000 to find out :-)) https://www.nierle.com/en/article/32558/Verbatim_M-DISC_Blu-ray_BD-R_25_GB_-... jdd
On 2016-10-04 16:30, jdd wrote:
On 04/10/2016 at 14:39, Carlos E. R. wrote:
I made a review of several compressors, none is good enough. Almost none have error recovery.
they should have:
http://www.7-zip.org/recover.html
at least the files out of the damaged part should be recoverable
Yes, I have to test it. If I recall correctly it was not proven reliable. One that is proven is rar, but it does not support all Linux attributes. Oh, looking at your link, it is terrible. With rar you simply tell the command to repair the archive and that's it.
Yes, that is true. Well, floppies were never meant for long-term storage, although I'm sure I still have some 5" up in the attic.
The original floppies sold in the 80's do last. Those made by the end of the 90's and later are terrible. Impossible to get one that lasts a month without an error. A box of ten has some that error out on format.
anyway, most people do not have floppy reader. or interface to make it work.
And those that have it, like me, would not think of doing a hard disk backup that way. LOL.
so, yes, when trying to make long term archives (one can't anymore speak of backup) have to think of the medium
for example, flash disks are known to have a short life (including ssd), I speak of time life, not number of cycles. Hard drives have good life expectancy but can dye unexpectedly.
once it was said than archive medium have to be updated at a bare minimum every 5 years. It's pretty easy as in this time price drop and size grow.
one can find dvd or BD said to be worth 1000 years. take appointment in 3000 to know :-))
Hah! :-)
https://www.nierle.com/en/article/32558/Verbatim_M-DISC_Blu-ray_BD-R_25_GB_-...
Ah. But we still do not have in Linux a backup program that gets close to what PCTools Backup did in the 80's - in its case, performing the backup to the supported media with compression and forward recovery bits. See: https://en.wikipedia.org/wiki/Forward_error_correction The idea is that even if the DVD/BD gets scratches and broken sectors, up to some percentage, *all* the files can still be correctly recovered without errors. See also "par2" (https://en.wikipedia.org/wiki/Parchive). If you want it, build it yourself. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
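par2 can bolt that forward error correction onto an ordinary archive after the fact; a hedged sketch using Per's archive name, with 10% redundancy picked arbitrarily:

  par2 create -r10 /media/LinuxBackup/full.tar.gz.par2 /media/LinuxBackup/full.tar.gz
  par2 verify /media/LinuxBackup/full.tar.gz.par2      # check the archive against the recovery data
  par2 repair /media/LinuxBackup/full.tar.gz.par2      # rebuild it if damage was found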
On 04/10/2016 at 16:58, Carlos E. R. wrote:
But we still do not have in Linux a backup program that gets close to what PCtools Backup did in the 80's.
I still must have it somewhere :-)) jdd
On 10/04/2016 07:58 AM, Carlos E. R. wrote:
Yes, I have to test it. If I recall correctly it was not proven reliable.
To be perfectly pedantic about it, you can not prove any archive method reliable. It can only be proven un-reliable, and only then if the underlying storage media is reliable. Which does not happen in this world. But let's get real here. The topic is backing up a hard drive before a clean install. So you use tar.gz and then compare, or run some tests for readability. Then you carry on with your install and restore. Your media or your tar.gz is not likely to go bad in the 90 minutes it takes to re-install. Entire distros use tar.gz as their package format. The whole Linux industry runs away from unreliability toward safety, yet you just don't hear anyone howling about tar.gz files. Where are the articles demanding everybody avoid them? How come they aren't deprecated loudly in the press and in distributions? Instead we run headlong into things like BTRFS (100% failure rate for me) and systemd and kde4 (years before it was usable) and other unproven crapware with glee, while taunting the stragglers as if they were holocaust deniers. Then you turn around and suggest tar.gz will bit-rot inside of 90 minutes? Come on Carlos. -- After all is said and done, more is said than done.
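In the spirit of "compare or run some tests for readability", two quick checks against the archive name Per suggested earlier; both have to read the whole file, so either one will catch a truncated or corrupted copy:

  gzip -t /media/LinuxBackup/full.tar.gz              # verifies every gzip CRC in the stream
  (cd / && tar dzf /media/LinuxBackup/full.tar.gz)    # --diff: compares each member against the live files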
On 2016-10-04 19:03, John Andersen wrote:
On 10/04/2016 07:58 AM, Carlos E. R. wrote:
Entire distros use tar.gz as package managers.
With checksums. If there is a problem you download it again, so it's not an issue.
The whole linux industry runs away from unreliability toward safety, yet you just don't hear anyone howling about tar.zs files. Where are the articles demanding everybody avoid them? How come they aren't deprecated loudly in the press and in distributions?
If you search around a bit you can find them; it is a known issue. I comment about it because I have read about it.
Then you turn around and suggest tar.gz will bit-rot inside of 90 minutes?
I never said that. I said that /IF/ there is an error that corrupts one byte of a compressed tar, with most compressors used you lose the entire archive. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 10/04/2016 01:34 PM, Carlos E. R. wrote:
I said that /IF/ there is an error that corrupts one byte of a compressed tar, with most compressors used you lose the entire archive.
And it is for that precise reason that I do not use that method for backing up critical data. Not since I got bit on the ass by it once, which is all it takes for me to never use it again. Terabyte drives are under US$50 now, so cloning a drive is cheap. Daily backups are cheap and fast nowadays. I certainly remember when they weren't. Etc. But whatever you do, if it's critical data, don't rely on a compressed tar file. Just IMHO, of course. You do whatever you wish.
On 05/10/2016 at 07:58, Stevens wrote:
Terabyte drives are under US$50 now so cloning a drive is cheap. Daily backups are cheap and fast now days. I certainly remember when they weren't. Etc. But whatever you do, if it's critical data, don't rely on a compressed tar file. Just IMHO, of course. You do whatever you wish.
The large files do not compress well anyway (jpg and mpg/mp4 are already compressed). Only text files (.txt) compress well, and they are rarely large nowadays. jdd
On 2016-10-05 10:24, jdd wrote:
On 05/10/2016 at 07:58, Stevens wrote:
Terabyte drives are under US$50 now so cloning a drive is cheap. Daily backups are cheap and fast now days. I certainly remember when they weren't. Etc. But whatever you do, if it's critical data, don't rely on a compressed tar file. Just IMHO, of course. You do whatever you wish.
the large files do not compress well anyway (jpg and mpg/mp4 are already compressed). Only text files (.txt) do compress and they a rarely large nowadays
I want to consider compressed filesystems. btrfs has the feature, but I think it is still considered experimental. Combined with snapshots, it would do a nice backup system. But somehow it doesn't ring well, with btrfs... one seeks ultimate reliability. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 10/05/2016 01:58 AM, Stevens wrote:
Terabyte drives are under US$50 now so cloning a drive is cheap. Daily backups are cheap and fast now days.
Indeed. "Portable" terabyte drives aren't much more expensive if you want to make off-site copies. And a backup using rsync can make multi-generational copies. See, comprehensively, http://www.mikerubel.org/computers/rsync_snapshots/ -- A: Yes > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon? -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
Stevens wrote:
On 10/04/2016 01:34 PM, Carlos E. R. wrote:
I said that /IF/ there is an error that corrupts one byte of a compressed tar, with most compressors used you lose the entire archive.
And it is for that precise reason that I do not use that method for backing up critical data. Not since I got bit on the ass by it once, which is all it takes for me to never use it again.
Terabyte drives are under US$50 now so cloning a drive is cheap. Daily backups are cheap and fast now days. I certainly remember when they weren't. Etc. But whatever you do, if it's critical data, don't rely on a compressed tar file. Just IMHO, of course. You do whatever you wish.
That's it. Why should one deal with a probable source of errors that's cheap to avoid? - Chris
On 10/04/2016 02:34 PM, Carlos E. R. wrote:
On 2016-10-04 19:03, John Andersen wrote:
On 10/04/2016 07:58 AM, Carlos E. R. wrote:
Entire distros use tar.gz as package managers.
With checksums. If there is a problem you download it again, so its not an issue.
Ahn, no, Carlos, you've confused the issue. You're doing too much of the Marshall McLuhan "the medium *IS* the message" thing. A checksum on download of an ISO or RPM or whatever that contains a TAR or CPIO or whatever package that has been compressed serves other purposes as well. Not just "is it corrupted in transmission" but "is the copy you got the one that the packagers intended, or one that was created by hackers". A package can have internal checksums for its segments (and so can a file compression method, for that matter) ... No, wait ... <quote src="https://en.wikipedia.org/wiki/Gzip"> https://en.wikipedia.org/wiki/Gzip </quote> What is it you are gzipping? You may also not be aware of it, but CRC-32 can do some error correction as well as detection. Of course your policy for this depends on many things. In IPv4 there is a CRC on the header, but the stack just discards erroneous frames and asks for retransmission. This policy is based on the relative cost of processing vs the cost and reliability of the network, and in IPv6 the whole issue of a frame header CRC has been discontinued. On hard disks the CRC information can be and is used to verify the correctness of the data in the sector just read. It can be and is used for error correction as well. I think this has been the case since back in 1982, when I wrote a low level disk driver for the RL02 on a PDP-11 based V6 UNIX for a carrier grade application for a telco that did this. A repeated error caused the corrected data to be re-written elsewhere, with the low level disk mapping in the driver taking care of the redirection. This is now normal practice with modern disk drives and is taken care of by the on-board electronics, so that the computer operating system sees an unblemished linear array of sectors no matter how they might be organized at the physical level. Again, this is a risk management issue. The computational cost vs the cost of ... well, what else could you do? This isn't like the network where you can ask for retransmission. We're going to face the same thing when we have the Interplanetary Internet. The (time) cost of asking for a network packet repeat will be excessive. Perhaps TAR'ing up the whole system or FS and expecting there to be only one or two errors in something that large is what worries you? Well, perhaps you shouldn't take that big a bite of the cake. You might read this: https://www.g-loaded.eu/2007/12/01/choosing-a-format-for-data-backups-tar-vs... Which might also lead you to conclude that some other means of making backups or of making archives is needed. And that I can't argue with. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 2016-10-05 11:56, Anton Aylward wrote:
On 10/04/2016 02:34 PM, Carlos E. R. wrote:
On 2016-10-04 19:03, John Andersen wrote:
On 10/04/2016 07:58 AM, Carlos E. R. wrote:
Entire distros use tar.gz as package managers.
With checksums. If there is a problem you download it again, so its not an issue.
Ahn, no, Carlos, you've confused the issue. You're doing to much of the Marshall McCluhan "Medium *IS* the message" thing.
Checksum on download of a ISO or RPM or whatever that contains a TAR or CPIO or whatever package that has been compressed served another purposes well. Not just 'is it corrupted in transmission" but 'is the copy you got the one that packagers intended or one that was created by hackers'.
It serves both purposes: an accidental alteration during transmission, and an intentional alteration by hackers or somebody. And yes, accidental alterations do appear now and then in the mailing lists. zypper complains, and does not recover automatically; you have to delete the file.
I thin this is the the case since back in 1982 I wrote a a low level disk driver for the RL02 on a PDP-11 based V6 UNIX for carrier grade application for a telco that did this. A repeated error caused the corrected data to be re-written elsewhere and the low level disk mapping in the driver taking care of this redirection.
This is now normal practice with modern disk drives and is taken care of by the on-baord electronics so that the computer operating system sees an unblemished linear array of sectors no matter how they might be organized at the physical level.
XFS is adding checksums for at least metadata sectors, and they are considering data checksums, so that file integrity is guaranteed. Btrfs I think is going a similar route, after all, they share several devs.
Perhaps TAR'ing up the whole system or FS and expecting there to be only one or two errors in something that large is what worries you? Well perhaps you shouldn't take that big a bite of the cake.
The problem you apparently don't understand is that a single error and the whole targz archive is lost. I'm not analyzing anything. I'm not saying that it happens often. I'm simply saying that I do not trust tar.gz for archival and backup, unless you take additional measures. And this is a fact that is well documented. A single byte error and the tgz is lost. You may say that it is very rare to have such an error. Accepted. But if things go wrong and you do get that single byte error, then the whole tgz is lost, big or small. It is a documented fact.
You might read this: https://www.g-loaded.eu/2007/12/01/choosing-a-format-for-data-backups-tar-vs...
I have not read that document, but yes, I'm aware of the advantages of cpio in this respect. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 10/05/2016 07:41 AM, Carlos E. R. wrote:
This is now normal practice with modern disk drives and is taken care of by the on-baord electronics so that the computer operating system sees an unblemished linear array of sectors no matter how they might be organized at the physical level.
XFS is adding checksums for at least metadata sectors, and they are considering data checksums, so that file integrity is guaranteed. Btrfs I think is going a similar route, after all, they share several devs.
That is serving quite a different purpose. The purpose of the drive's on-board electronics implementing sector look-aside is to, as I said, present a 'clean disk' to the device drivers of the OS. The file system sits above that. Guaranteeing file integrity is about facing different problems. Part of the transition from V6 UNIX to V7 UNIX, for example, was to treat the handling of the metadata of the file system differently, simple as the file system of that time was. Writing the structural and the inode information before the data information enabled FSCK to recover file system problems, problems that HAD NOTHING WHATEVER TO DO WITH THE INTEGRITY OF THE PHYSICAL DISK ITSELF. In a more modern context, different file systems address this in different ways, treating metadata and structural information differently from content data, each in their own way. Some permit the metadata to be written to a separate device! For the most part, this is not about the integrity of the physical disk, though that might be a benefit in some ways, though given the ways a modern disk can fail I doubt it. It is about abuse: it's for situations where the system crashes, perhaps a power-out, perhaps a wiring fault or disconnect, so that a FSCK can do a recovery. Or better still, the file system can recover itself without an explicit FSCK. Yes, it is possible that a disk starts to fail. It's possible that it develops more bad sectors than are allowed for by the reserved look-aside allocation. It is possible that the disk ages, or that the atmospheric sealing fails and dust gets in and the error rate skyrockets. Personally I don't think the kind of file system checks you describe or refer to will be much use when that happens. I say this because I have had that level of catastrophe happen, and it basically causes the disk to become unresponsive at a more fundamental level. These integrity checks are good and useful, but they serve a different purpose. Conflating the two purposes is not merely sloppy thinking but will end up in wasted effort and perhaps even misguided application. All of this is quite beside the point and has nothing to do with the stated objective of a "Reliable way to backup hard drive before clean install". We've offered many ways to do that and you keep bitching about errors in TARZ. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 10/05/2016 07:41 AM, Carlos E. R. wrote:
The problem you apparently don't understand is that a single error and the whole targz archive is lost. I'm not analyzing anything. I'm not saying that it happens often. I'm simply saying that I do not trust tar.gz for archival and backup, unless you take additional measures.
And this is a fact that is well documented. A single byte error and the tgz is lost. You may say that it is very rare to have such an error. Accepted. But if things go wrong and you do get that single byte error, then the whole tgz is lost, big or small. It is a documented fact.
Perhaps it's documented by people who don't understand (or perhaps don't use) CRC, but the whole point of CRC is not just an integrity checksum but an error correction code. Yes, there are checksums that don't give detailed error information. Yes, it may be that the file is so big that the correction information is not a lot of use. As JDD said, large files do not compress well anyway, and he also said that gigabyte tars are a pain. I put these in the class of "don't go there". If you're TARing up a whole filesystem, rather than using one of the alternative backup or archiving methods we've discussed, and so get into these considerations, then I think you're a fool. My base SYSTEM is about 20G. My HOME is made up out of many small file systems; apart from movies and music, none is larger than 5G, so I can back up each onto a DVD. I did use rsync to an off-site, low-cost repository, but my cable provider charges the earth for the extra bandwidth. Yes, I'm changing provider to one that allows 'unlimited after 2am'. Carlos, I think you're getting ridiculous in your defence of that position, ignoring that modern compression uses CRC, which is correctable and is not a mere checksum, and throwing up irrelevancies that have nothing to do with this issue. All of this is quite irrelevant to your original stated objective of backing up a hard drive before a clean install. As I said, the alligator in the swamp analogy holds. When you're up to your ass in alligators while draining the swamp, it's difficult to remember that your purpose was to build a luxury seafront condo complex with a 9-hole golf course, tennis courts and spa. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 2016-10-05 14:42, Anton Aylward wrote:
On 10/05/2016 07:41 AM, Carlos E. R. wrote:
Carlos, I think you're getting ridiculous in your defence of that position, ignoring that modern compress uses CRC which is correctable and is not a mere checksum, and throwing up irrelevancies that have nothing to do with this issue.
I deny that gzip uses CRC to correct errors.
All of this is quite irrelevant to you original stated objective of backing up a hard drive before a clean install.
tar.gz was mentioned as a possibility, and I contend that it shouldn't be used. Simple as that. Of course I would use other methods; I'm aware of them. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 10/04/2016 01:03 PM, John Andersen wrote:
On 10/04/2016 07:58 AM, Carlos E. R. wrote:
Yes, I have to test it. If I recall correctly it was not proven reliable.
To be perfectly pedantic about it, you can not prove any archive method reliable. It can only be proven un-reliable, and only then if the underlying storage media is reliable. Which does not happen in this world.
But lets get real here. The topic is Backup a hard drive before clean install.
So you use tar.gz and then compare or run some tests for readability. Then you carry on with your install and restore.
Your media or your tar.gz is not likely to go bad in the 90 minutes it takes to re-install.
Entire distros use tar.gz as package managers. The whole linux industry runs away from unreliability toward safety, yet you just don't hear anyone howling about tar.zs files. Where are the articles demanding everybody avoid them? How come they aren't deprecated loudly in the press and in distributions?
Instead we run headlong into things like BTRFS (100% failure rate for me) and systemd and kde4 (years before it was useable) and other unproven crapware with glee while taunting the stragglers as if they were holocaust deniers.
Then you turn around and suggest tar.gz will bit-rot inside of 90 minutes?
Come on Carlos.
Very! Well! Said!, John, although as you say, you're being pedantic, which is usually something people expect of me :-) But let me say it more briefly: 1. It's about risk management. That is the most important thing and should be of overarching concern. It should be the driver that led you to consider the upgrade in the first place. 2. Backups are different from archives. If you don't understand that, then you will get yourself mightily confused. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 2016-10-05 11:16, Anton Aylward wrote:
On 10/04/2016 01:03 PM, John Andersen wrote:
Then you turn around and suggest tar.gz will bit-rot inside of 90 minutes?
Come on Carlos.
Very! Well! Said!, John, although as you say, you're being pedantic, which is usually something people expect of me :-)
But let me say it more briefly
Look again at my answer to that, consider the emphasis, and then look at Stevens' answer. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
On 10/05/2016 05:49 AM, Carlos E. R. wrote:
On 2016-10-05 11:16, Anton Aylward wrote:
On 10/04/2016 01:03 PM, John Andersen wrote:
Then you turn around and suggest tar.gz will bit-rot inside of 90 minutes?
Come on Carlos.
Very! Well! Said!, John, although as you say, you're being pedantic, which is usually something people expect of me :-)
But let me say it more briefly
Look again at my answer to that, consider the emphasis, and then at Stevens answer.
You said <quote>
I said that /IF/ there is an error that corrupts one byte of a compressed tar, with most compressors used you lose the entire archive. </quote>
You seem to miss out on that "one bit" being correctable by CRC. Yes, un-tar.gz doesn't do that, but there are tools that do. And you never got to my conclusion that perhaps TAR'ing/zipping is not the way to go, given the alternatives. As discussed, one alternative might be a new disk. Whether for backup or migration (which I've done since I use LVM, but you could use rsync) would be up to you. Disks have lifetimes, so getting a new disk periodically and migrating is a good strategy anyway :-) I think you're being a bit of an inflexible stick-in-the-mud by refusing to acknowledge (a) TARZ has CRC and is error correctable using suitable tools and (b) there are alternatives to achieve your objective of a newly installed OS & system while preserving your data. You're obsessing about a particular method while failing to consider the overall objectives. The dictum about alligators in the swamp comes to mind. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon?
On 10/05/2016 07:42 AM, Carlos E. R. wrote:
On 2016-10-05 12:13, Anton Aylward wrote:
You seem to miss out on that "one bit" being correctable by CRC
No if the only thing you have got is the tgz archive. Again, this problem is documented.
"behaves awfully:, "Gives a lot of hastle in doing a recovery" and "its too difficult for me" and even "I don't know how so I don't believe it can be done" are all common cases. The thing is that in many cases it can be done. The bigger the file -- and you're talking about doing whole filesystems, the less likely or the more hassle it is going to be. And anyway, since many of the files are likely to be compressed already, as JDD points out, an error in them renders them unusable and that has nothing to do with TAR or your compression algorithm. There would be little point in my compressing /home/anton as whole file system; the BIG parts of that tree are the music and the videos, which amount to over 80G, compared to the rest of the tree that amounts to about less that 25G. That being said, I use LVM and have a number of LVs and file systems to make that 25G manageable for backing up onto DVD. The DVDs are then mountable as ISO file systems and can be treated as archives to extract specific file or to do a complete snapshot restore. LVM lets me take snapshots to make the DVD from :-) In terms of acheive the stated objective of find a "Reliable way to backup hard drive before clean install" this sounds a good strategy. Saying "it can't EVER" be done only needs one case of it being done to prove you wrong. http://www.gzip.org/recover.txt Let us not forget, gzip work on single files, not groups of files. Plain old zip (and pkzip) operate on groups of files and have the concept of the archive built-in. When you're dealing with individual files the issue of CRC for individual files rather than the whole archive changes the landscape. Let us not forget that many are switchng to the use of XZ compressions for a variety of reasons. It is a block compression algorithm and if there is an unrecoverable error then only that one file wherein it occurred is affected, all the rest are unaffected. And finally.. Lets not forget that a CRC is not a checksum. A CRC is about error correction. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon? -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On 05/10/2016 at 13:42, Carlos E. R. wrote:
On 2016-10-05 12:13, Anton Aylward wrote:
You seem to miss out on that "one bit" being correctable by CRC
No if the only thing you have got is the tgz archive. Again, this problem is documented.
Not completely true - try it... I have a collection of books in epub form, an easy source for testing (do not extract in the same folder!!). Simply with Dolphin, right-click "compress"... then change a digit with ghex. Compressed as tar.gz: if I change one digit (4 bits, one hex number) and save, the file opens perfectly (I did not verify whether the one affected file is readable). If I remove one hex number, the archive can no longer be opened with Ark, so the length of the file is the culprit. Compressed as a zip file: the archive opens in both of the situations above. jdd
On 10/04/2016 10:03 AM, John Andersen wrote:
To be perfectly pedantic about it, you can not prove any archive method reliable.
Of course it can be proven reliable. That statement, along with the rest of your email complaining about systemd, kde4 and btrfs, is just trollish at best. It can't even be taken remotely seriously. On 10/04/2016 10:03 AM, John Andersen wrote:
It can only be proven un-reliable, and only then if the underlying storage media is reliable. Which does not happen in this world.
Oh, so in science something can only be proven unreliable. How can you now talk about the storage media being reliable? According to your logic that's not possible.
On October 17, 2016 12:04:55 AM PDT, sdm <fastcpu@openmailbox.org> wrote:
On 10/04/2016 10:03 AM, John Andersen wrote:
To be perfectly pedantic about it, you can not prove any archive method reliable.
Of course it can be proven reliable. That statement along with the rest
of your email complaining about systemd, kde4 and btrfs is just trollish at best. It can't even be taken remotely seriousy.
On 10/04/2016 10:03 AM, John Andersen wrote:
It can only be proven un-reliable, and only then if the underlying storage media is reliable. Which does not happen in this world.
Oh, so in science something can only be proven unreliable. How can you now talk about the storage media being reliable? According to your logic that's not possible.
You seriously need to learn to read. -- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
Carlos E. R. wrote:
On 2016-10-04 14:03, Per Jessen wrote:
Carlos E. R. wrote:
Not tar. The problem is gz.
Oh? Well, then use another compressor. xz is good.
I made a review of several compressors, none is good enough. Almost none have error recovery.
Okay. I use tar to disk and to tape, it has never been an issue for me. -- Per Jessen, Zürich (15.6°C) http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.
On 2016-10-04 16:42, Per Jessen wrote:
Carlos E. R. wrote:
On 2016-10-04 14:03, Per Jessen wrote:
Carlos E. R. wrote:
Not tar. The problem is gz.
Oh? Well, then use another compressor. xz is good.
I reviewed several compressors; none is good enough. Almost none have error recovery.
Okay. I use tar to disk and to tape, it has never been an issue for me.
That is so; done that way, there is no issue. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
Quoting Per Jessen <per@computer.org>:
Carlos E. R. wrote:
On 2016-10-04 11:47, Per Jessen wrote:
Carlos E. R. wrote:
On 2016-10-04 08:14, Per Jessen wrote:
John Andersen wrote:
On 10/03/2016 11:59 AM, Carlos E. R. wrote: > Reason: a few (or one) byte errors make the tar unrecoverable.
In reality, Carlos, exactly how many of these have failed for you?
Yes, I have to wonder that too.
It is rare, but it is documented. Media failures are possible, and if one happens in the middle of a tar.gz you lose it completely.
Sure, but every archiving tool is vulnerable to media and network failures, tar no more so than any other.
Not tar. The problem is gz.
Oh? Well, then use another compressor. xz is good.
All compression types are vulnerable to a single corrupted bit making the rest of the file unusable. Compression removes all redundancy. Unless there is error correction (additional bits), flipping a single bit is unrecoverable. Codes are variable length, so there is no way to know where the next valid code starts. JPEG uses periodic reset codes that let decoding/decompression restart, but everything between a corrupt bit and the next reset is lost. Archives usually have a table of contents, so the most a single bit error can corrupt is one file. Jeffrey -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
On 2016-10-05 04:13, Jeffrey L. Taylor wrote:
Quoting Per Jessen <per@computer.org>:
Carlos E. R. wrote:
Not tar. The problem is gz.
Oh? Well, then use another compressor. xz is good.
All compression types are vulnerable to a single corrupted bit making the rest of the file unusable. Compression removes all redundancy. Unless there is error correction (additional bits), flipping a single bit is unrecoverable.
Exactly. But a pkzip archive is less vulnerable than a tar.gz archive, because the corruption affects a single file. Rar adds, on request, those error correction bits. There is another open format that does similarly, but I don't remember which.
Codes are variable length, so there is no way to know where the next valid codes starts. JPEG uses periodic reset codes that lets decoding/decompression to restart but everything between a corrupt bit and the next reset is lost.
Good idea.
Archives usually have a table of contents, so the most a single bit error can corrupt is a file.
except that a tar.gz compresses the archive, table of contents and all. A single bad bit and you cannot recover the archive itself, so no table of contents, nothing. -- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)
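As a rough illustration of the difference, assuming the paths below and the non-free rar utility are available (both are assumptions, not recommendations from the thread):

    # every zip member carries its own CRC, so damage stays local to one file
    zip -r backup.zip etc-copy/
    unzip -t backup.zip
    # rar can add a recovery record (extra redundancy) on request
    rar a -rr backup.rar etc-copy/
    rar t backup.rar

A tar.gz of the same tree is a single compressed stream with a single CRC at the end, so any damage calls the whole archive into question.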
On 10/04/2016 10:13 PM, Jeffrey L. Taylor wrote:
Archives usually have a table of contents, so the most a single bit error can corrupt is a file.
Indeed, but calling TAR an archive is a misnomer. It's a streaming protocol intended for use with tapes, not 'direct access, random seek' devices like disk files. Recall the issue: a backup is not an archive.
Yes, if you have a TAR file on a disk (compressed or not) you can use a tool - I use Konqueror - to open it up and, at the application level, read through the whole stream and construct that index, but that index, that image of the file as a file system hierarchy, is in the application, not in the file itself.
TAR is NOT an archive format. -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon? -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
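A quick way to see this from the shell: the listing below is produced by scanning the stream header by header, and it exists only in the tool's output, never as an index inside the file itself (the archive name is illustrative):

    tar tzvf backup.tar.gz | head            # tar walks the stream to print this
    time tar tzf backup.tar.gz > /dev/null   # listing means decompressing the whole stream

zip, by contrast, keeps a central directory at the end of the archive, which is why 'unzip -l' on a multi-gigabyte zip returns almost instantly.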
On 05/10/2016 at 12:18, Anton Aylward wrote:
Yes, if you have a TAR file on a disk (compressed or not) you can use a tool - I use Konqueror - to open it up and, at the application level, read through the whole stream and construct that index, but that index, that image of the file as a file system hierarchy, is in the application, not in the file itself.
yes, and multi-gigabyte tars are a pain
TAR is NOT an archive format.
exactly jdd -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
Quoting IEEE alias <jeff.taylor@ieee.org>:
I am planning on a clean install (going from 13.1 to 42.2 Beta) on the root partition, leaving the home partition alone. I'm not doing complete backups because of limited space on the backup drive. I've room for a single full backup, both partitions. What is a reliable way?
"cp -a / /media/LinuxBackup/full" is what occurs to me. Is there a better way?
Thanks to all who weighed in. "Backup" may have been misleading. This is a one-time snapshot, archive, whatever. No updates, so rsync's incremental and/or differential backup space saving isn't needed and won't be used. Rsync has other features that look useful, e.g. "rsync -x" for each disk partition; that way /proc and other virtual filesystems won't be copied.
The backup drive has space to do a complete image of the hard drive. However, I'm going to change partition sizes, so that's not desirable. One person's comment about making a copy of the MBR set me to thinking. The only scenario where I might need to restore the MBR is if the clean install goes totally awry. I don't think it likely, but I would probably do a clean install of 13.1 in that case.
An archiver (tar = tape archiver) and a compression utility add complexity and additional failure modes for no benefit that I can see. This needs to be done once, correctly, so simplicity is key. Booting from a Live CD or rescue disk looks like a very good idea.
Moving the dot files in home directories also looks like a good idea. It's more work to merge them, but it avoids sticking with archaic formats. RPM is good about making copies of system programs, but user configurations are a crap shoot. I've been bitten by this in the past and had forgotten.
The whole point of this exercise is to not exclude any file. Often some vital file has moved from where it was being backed up to a directory where it wasn't. I have backups with every file I think is important, but I thought so several times in the past and have been proven wrong.
Thanks to all, Jeffrey -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
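One possible rendering of that plan as commands, assuming the backup drive is mounted at /media/LinuxBackup and the system disk is /dev/sda (both assumptions; adjust to the real layout):

    # one rsync per filesystem; -x stops at mount points, so /proc, /sys
    # and the backup drive itself are not descended into
    rsync -aAXH -x / /media/LinuxBackup/root/
    rsync -aAXH -x /home/ /media/LinuxBackup/home/
    # keep a copy of the MBR (boot code plus partition table) just in case
    dd if=/dev/sda of=/media/LinuxBackup/sda-mbr.bin bs=512 count=1

Running this from a live CD or rescue system, as suggested above, avoids the open-file problem entirely, since nothing on the source filesystems is changing.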
Have you seen the 'copy/move with verify' variants? http://www.ijntema.com/mediawiki/index.php/Main_Page#Secure_Copy_Tools -- A: Yes. > Q: Are you sure? >> A: Because it reverses the logical flow of conversation. >>> Q: Why is top posting frowned upon? -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
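A plain copy can also be verified by hand, without special tools; a minimal sketch, assuming the same /media/LinuxBackup target as above:

    cp -a /home/. /media/LinuxBackup/home/
    # compare the copy against the source, file by file
    diff -r /home /media/LinuxBackup/home
    # or let rsync re-checksum both sides; it should report nothing left to transfer
    rsync -aAXH --checksum --dry-run -v /home/ /media/LinuxBackup/home/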
participants (12)
-
Andrei Borzenkov
-
Anton Aylward
-
Carlos E. R.
-
Chris
-
David T-G
-
Greg Freemyer
-
jdd
-
Jeffrey L. Taylor
-
John Andersen
-
Per Jessen
-
sdm
-
Stevens