Anton Aylward wrote:
> On 09/27/2015 03:53 AM, Linda Walsh wrote:
> Linda, you've made a generalization that isn't valid.
> Not everyone uses the same backup strategy.
Wrong, and that's not what I do. I use the utility designed for dumping unix FS's.
> Yes, if your backup is to convert all the files into a tarball and write that out to long term media, you are correct.
Not so primitive -- incremental backups aren't supported by many tars. LT media? I back up to RAID, so a restore of a FS happens at about 200-400MB/s. I.e. I scale my backup media to the size of what I am backing up. As disk sizes have grown, the need for faster backup/restore has increased at the same time.
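For scale, a minimal back-of-the-envelope sketch of how restore time goes with size and throughput (the filesystem sizes below are hypothetical; 200-400MB/s is the RAID rate above, 100KB/s is the kind of rate the slow product mentioned further down delivered):

# Restore time = data size / restore rate.  Sizes here are hypothetical.
SIZES_GB = [100, 500, 2000]
RATES = {
    "RAID @ 200 MB/s": 200e6,
    "RAID @ 400 MB/s": 400e6,
    "slow product @ 100 KB/s": 100e3,
}

def fmt(seconds):
    """Show hours for short runs, days for long ones."""
    hours = seconds / 3600
    return f"{hours:.1f} h" if hours < 48 else f"{hours / 24:.1f} d"

for size_gb in SIZES_GB:
    for label, rate in RATES.items():
        print(f"{size_gb:>5} GB via {label:<24}: {fmt(size_gb * 1e9 / rate)}")

The point is that restore throughput, not backup granularity, decides whether a backup is usable when a disk dies.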
> But some people simply do disk to disk and archive the disk.
> Some tape methods preserve the gaps in the file. It's one thing to dump your database to a text file, a series of SQL statements, and back that up, but some people quite literally back up the database.
A DB doesn't take 4kb/datum, so it's not the same thing.
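A rough sketch of the difference, with made-up numbers: a filesystem charges at least one whole block per file, while a DB packs many records into each block/page.

# Hypothetical figures: a million ~100-byte data items.
BLOCK = 4096            # filesystem block size under discussion
records = 1_000_000
record_size = 100

# One tiny file per datum: each file occupies at least one 4 KiB block.
one_file_each = records * BLOCK

# DB-style: records packed back to back, rounded up to whole blocks.
packed = -(-records * record_size // BLOCK) * BLOCK

print(f"one file per datum: {one_file_each / 2**30:.2f} GiB")
print(f"packed (DB pages) : {packed / 2**30:.2f} GiB")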
> You can argue that there are modes of backing up that convert this to actual space, which is why you should dump files and back up the dump. But there are backup tools, like rsync, which honour and preserve the sparseness.
Not going to argue that. xfsdump and tar can both preserve sparse files.
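For anyone who wants to check their own files, a minimal sketch (the path is just an example; whether the hole really stays unallocated depends on the filesystem) that creates a sparse file and compares its apparent size with the space actually allocated. GNU tar wants --sparse to keep the holes, rsync has a --sparse option, and xfsdump handles them as part of dumping the FS.

import os

path = "/tmp/sparse_demo"      # example path

# Seek far ahead and write a few bytes; the skipped range becomes a hole.
with open(path, "wb") as f:
    f.seek(1 << 30)            # 1 GiB offset, nothing written in between
    f.write(b"end")

st = os.stat(path)
print(f"apparent size: {st.st_size} bytes")
print(f"allocated    : {st.st_blocks * 512} bytes")  # st_blocks is in 512-byte units

os.remove(path)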
> We've long since established that not everyone runs their system the way you do, Linda. Please don't assume your way is the only way.
Never said it was. But the original point I made was about poor designs often being at fault for inefficiencies. If someone chooses a poor design for backup, the same applies.

I've tried other backup solutions and ended up returning them for my money back when they couldn't restore in a reasonable time. One solution that was great in regards to backup granularity (each change to a file was backed up) did a filesystem restore at 100K/s... on a 1G filesystem... it would have taken 3-4 days to restore one partition. Others have been similar. If I can't restore the lost files within a reasonable time -- minutes to hours for a full restore -- it's pointless.

Also, I use different methods for different media types, which is why I don't want to combine /var, /(root), /usr, /home... etc. They have different needs in regards to frequency of update and backup.

The only generalization I used was that if an intelligent backup strategy is used, then the problems of a 4k block size can be minimized. If you can prove a counter case to that, feel free to call my generalization invalid.

It's the same with many solutions these days -- some throw faster processors and more space at the problem to avoid the cost of better design. They get what they pay for.