On 09/27/2015 06:34 PM, Linda Walsh wrote:
Anton Aylward wrote:
On 09/27/2015 03:53 AM, Linda Walsh wrote:
Linda, you've made a generalization that isn't valid.
Not everyone uses the same backup strategy.
Wrong and that's not what I do.
I am NOT wrong when I say "Not everyone uses the same backup strategy."
I use the utility designed to do unix FS's
Yes, if your backup is to convert all the files into a tarball and write that out to long term media, you are correct.
Not so primitive -- incremental backups aren't
supported by many tars.
They only need to be supported by ONE version of tar, and it's the version that comes with Linux -- with SUSE, RH, Mageia. I don't know about the non-RPM systems like Ubuntu. See
http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html
http://www.unixmen.com/performing-incremental-backups-using-tar/
http://paulwhippconsulting.com/blog/using-tar-for-full-and-incremental-backu...
http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/cha...
I never said tar was 'primitive'. The tar command may be a primitive, aka a fundamental, in the way that many binaries in /usr/bin are 'primitives' around which we construct more complex scripts.
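For anyone who hasn't used it, here's a minimal sketch of the GNU tar incremental scheme those pages describe (the paths and snapshot file names are mine, purely for illustration):

    # Level 0 (full) dump; the snapshot file records what was saved
    tar --create --listed-incremental=/var/backup/home.snar \
        --file=/var/backup/home-full.tar /home

    # Level 1 dump: work on a copy of the snapshot so the level-0 record
    # is kept; tar then archives only what changed since the full dump
    cp /var/backup/home.snar /var/backup/home-1.snar
    tar --create --listed-incremental=/var/backup/home-1.snar \
        --file=/var/backup/home-incr1.tar /home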
But some people simply do disk to disk and archive the disk.
You can argue that there are modes of backing up that convert this to actual space, which is why you should dump files and back up the dump. But there are backup tools, like rsync, which honour and preserve the sparseness.
Not going to argue that. xfsdump and tar can
both preserve sparse files.
Indeed. So does cpio, but it does it badly. It writes out sparse files as blocks of zeros, but the "--sparse" option on reading in restores them to be sparse.
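As a rough illustration of the difference (file names invented for the example) -- tar and rsync keep the holes on the way out, while cpio expands them in the archive and only punches them back in on extraction:

    # GNU tar keeps the holes in the archive itself
    tar --create --sparse --file=sparse.tar bigsparsefile

    # rsync turns runs of zeros back into holes on the destination
    rsync --sparse bigsparsefile /mnt/backup/

    # cpio writes the zeros out in full; --sparse on copy-in re-creates holes
    echo bigsparsefile | cpio -o > sparse.cpio
    (cd /tmp/restore && cpio -id --sparse) < sparse.cpio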
We've long since established that not everyone runs their system the way you do, Linda. Please don't assume your way is the only way.
Never said it was. But the original point I made
was about poor designs often being at fault for inefficiencies.
There's a joke about a fighter pilot and a transport pilot that tells a lot about attitudes towards 'efficiency'. Often 'efficiency' is confused with speed. If I want speed I'd get more memory and a faster CPU with more cores, a faster SSD, or perhaps not even use Linux! IBM has some damn fast database systems that go back long before SQL and are used by banks and airlines. Why do you think your credit card swipes come back 'verified' so fast? That's not MySQL you're seeing! That's not Ruby on Rails or PHP. "Efficient" can also mean maintainable, sometimes under adverse conditions by poorly trained people. The DC9 has a reputation of being serviceable in jungle conditions with the most primitive of machine shops making spare parts.
If someone chooses a poor design for backup -- the same applies.
It's not a poor design; it's either adequate or not. I keep saying Context is Everything, and it is. My context is not yours. I've designed my system to be able to be quickly backed up or restored using DVDs. Perhaps your multi-terabyte databases would prove inefficient if you tried backing them up in 5G chunks! But in my *context* it's quick, easy, and because it's not inconvenient, it gets done, so it's an efficient /system/. Context is Everything.
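For what it's worth, the sort of DVD-sized round trip I mean looks roughly like this (genisoimage/growisofs and the paths are my illustration, not a prescription):

    # Build an ISO image of the tree to be saved -- keep it under ~4.4 GiB
    # so it fits a single-layer DVD -- then burn it
    genisoimage -r -J -o /tmp/backup.iso /home/anton/projects
    growisofs -dvd-compat -Z /dev/dvd=/tmp/backup.iso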
I've tried other backup solutions and ended up returning them for money back when they couldn't restore in a reasonable time.
And there you have it. It's the ability to *restore* that is the crucial issue. For me, the DVDs have a file system image. I can simply 'cp' or 'rsync' any file, set of files, or directory or tree that I want, and even do it by patterns, date and any variant of pick-and-choose using 'find'. Context is Everything.
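Concretely, a restore is nothing more exotic than this (mount point, paths and patterns invented for the example):

    # Mount the backup DVD read-only and cherry-pick what is needed
    mount -o ro /dev/dvd /mnt/dvd

    # A single file, or a whole tree
    cp -a /mnt/dvd/home/anton/projects/report.odt /home/anton/projects/
    rsync -a /mnt/dvd/home/anton/projects/ /home/anton/projects/

    # Or pick-and-choose by pattern and date with find
    find /mnt/dvd -name '*.tex' -newermt 2015-09-01 \
         -exec cp -a --parents {} /restore/ \;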
Also, I use different methods for different
media types -- which is why I don't want to combine /var, /(root), /usr, /home, etc. They have different needs with regard to frequency of update and backup.
Yes, that is important. A full system image backup can be very inflexible. It can also be a disaster, as one of my clients once found, if you make a mistake in the command line when doing a restore!
The only generalization I used was that if an intelligent backup strategy was used, then the problems of a 4k-block size could be minimized. If you can prove a counter case to that, feel free to call my generalization invalid.
There are many tools where 4K vs 512 is not the issue. Using 'tar' to a tape, the blocking issue is a completely different matter! It's about buffering, not space allocation! What? Tape? Well yes, 512-byte blocks eat a lot of tape: a header and trailer/checksum for each.
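To make the distinction concrete (the tape device name is just the usual Linux one):

    # The blocking factor sets the size of each record written to the tape,
    # not the filesystem allocation unit; 20 x 512 bytes = 10 KiB per record
    tar --create --blocking-factor=20 --file=/dev/st0 /home

    # A larger record means fewer inter-record gaps and better streaming
    tar --create --blocking-factor=128 --file=/dev/st0 /home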
It's the same with many solutions these days -- some throw faster processors and more space at solutions to avoid the cost of better design. They get what they pay for.
There you go again, assuming that 'efficient' means 'speed'.