Registration Account wrote:
This is the fundamental concept: we back up in a certain way so that, if there is a data failure, we can restore. If you need to restore a system to a particular date and you use incremental backups, the first part of the restoration is to retrieve the initial full backup, as it contains every file. The next step is to get hold of each incremental backup taken between the original full backup and the required date; when these are applied they overwrite files, bringing the baseline full backup forward to the required date.
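For illustration, this is roughly how it works with GNU tar's --listed-incremental mode (the paths and archive names here are just examples):

    # full backup: creates the archive and a snapshot file describing it
    tar -cf /backup/full.tar --listed-incremental=/backup/snapshot /home

    # later incremental run: saves only what changed since the snapshot
    tar -cf /backup/incr1.tar --listed-incremental=/backup/snapshot /home

    # restore: extract the full backup first, then each incremental in order
    tar -xpf /backup/full.tar --listed-incremental=/dev/null
    tar -xpf /backup/incr1.tar --listed-incremental=/dev/null

Note that losing the full archive makes the incrementals unusable, since they only contain changes relative to that baseline.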
The expression "we backup so that we CAN restore" is covered more precisely in data security. The expression is based on the issue that if you are not backing up correctly you can never recover, and hence the money you put into backups is wasted. The principle is timeless and covered in detail in data centres, which have to be able to lose all data and then restore to a designated point in time.

Scott :-)

Carlos E. R. wrote:
The Wednesday 2007-05-16 at 08:36 +1000, Registration Account wrote:
You only back or-order to restore - how do you fully restore an incremental backup if you lose the first file?

Scott

Sorry, I don't understand what you say :-?
What is "back or-order to restore"?
Agreed, but there is a bit more to it than that. There are, crudely, two cases: the home user case and the business case. (This is a very crude distinction.) The backup strategy should be based on how one intends to restore the system. It is of no value if you can successfully save and restore operationally useless data. It is not just how you back up, it is also what you back up, and thought needs to be given to what you back up and why. This also means that data management strategies should facilitate backup and restore procedures. The classic back-up-everything-in-sight approach, while simple, is rarely appropriate for most cases.

One first has to look at the scenarios in which one has to restore:

a) The machine is physically unavailable. This can be due to hardware failure, physical destruction, or because it has been half inched (stolen).

b) Software failure due to malware, software bugs, user error, or data corruption. (One can get the latter as a result of a), of course.)

Most people design their disaster recovery programme around a) and tend to ignore b). However, the classic backup technique gives only minimal protection against b), and b) is rather more frequent than most people are prepared to acknowledge. All you are likely to recover is the same mangled set of data from just before everything went pear shaped, and it is probably not a bright idea to restore something only for everything to fall over again. Data integrity strategies such as RAID or drive synchronisation are equally vulnerable to b).

A differential (rather than incremental) strategy can protect against b) to some extent, because one gets a historical perspective on the status changes on a machine, so one has a better chance of restoring to a last good state and plotting a recovery path so that good changes are not lost with the bad. However, full recovery can be time consuming, especially when one has no idea when things started going pear shaped; it can take a while for a problem to begin and its effects to be discovered. The main problem with an incremental approach is that one can acquire a lot of unrequired data, which can slow this process considerably. Either strategy is more of a shotgun than a surgeon's knife. (A tar-based sketch of the differential approach follows below.)

It does help in the business case if database-based systems are designed to back up not only the current record status but also the transactions that led to that status, and if some kind of document management system is in place. The data can be dumped in an application-specific format and stored to appropriate media. Any backup strategy then becomes more application focused than system focused, and based on the history of application usage (which can make it easier to identify points of failure). One can then break down backups into system-focused and application-focused requirements.

In the business case the initial objective is to get the system operational ASAP. One could argue that, at its simplest, user data on a system is in three different states (one can have more complex definitions, of course):

i) Currently active
ii) Recently active
iii) Historical (retained for legal or policy reasons)

Other data should be archived to media. (A recent survey suggested that up to 50% of data on a system will never be accessed again.) One does not need to restore everything immediately, merely the data which gets the business operational. As ii) and iii) are usually going to be significantly larger than i), it is best to prioritise i) and keep it as a discrete media set.
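As a rough sketch of the differential idea with GNU tar (paths and names are illustrative): each run works on a copy of the original level-0 snapshot file, so every archive holds everything changed since the full backup rather than since the previous run.

    # full backup, recording a level-0 snapshot
    tar -cf /backup/full.tar --listed-incremental=/backup/snap.0 /srv/data

    # differential run: copy the level-0 snapshot first, so the archive
    # contains all changes since the full backup, not since yesterday
    cp /backup/snap.0 /backup/snap.tmp
    tar -cf /backup/diff-$(date +%F).tar --listed-incremental=/backup/snap.tmp /srv/data

Restoring to a given day then needs only the full archive plus that day's differential, and the set of dated differentials gives the historical perspective described above.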
One also really only needs to back up i) religiously, as data in ii) and iii) will by definition change little if at all, and needs a different backup policy. So one has a further breakdown of application-focused backup by data state.

I have never really understood why some people insist on backing up the OS structures in their entirety on a regular basis. One can either keep a baseline image using a tool like dd, Ultris, or ghost, or just back up the configuration files and a record of the applications installed. (A sketch of that approach follows at the end of this message.) In a b) type failure it is probably better to install a clean OS and application set than to restore the old setup. In case a) it is usually unlikely one will end up with the same hardware configuration, so an OS build from scratch would make more sense. So a system-focused backup would tend to be based on backing up configuration data rather than the OS itself.

This is a more sophisticated and complex approach than that required by a home user, and the home user is mostly poorly served by backup software. However, some of the principles still apply. The home user may have a further problem: for those who do a lot of multimedia work, file sizes can be massive, and appropriate removable media devices do not exist. Replication to drives on the same machine is problematic, as is the caddy drive approach. One can encourage people to back up personal projects to media before further work is performed on a project (but how many take any notice), and most tools for writing data to DVD/CD are pretty clunky on both Windows and Linux platforms, which does not really encourage the user to perform this task. What is probably needed is a simple tool which saves the changes to the system onto media after a day's work or on request (and flags things like ISO images as needing to be copied separately).

(After testing out KDar/dar, star, and tar I have found a way of working with tar and DVDs/CDs. Dar and star have proved inadequate for my purposes, for different reasons.)

I have been following Carlos's examination of par and media integrity checking with some interest, but personally I think media data integrity is probably not as significant an issue on optical media as the rather erratic stability of device support on both Windows and Linux platforms.
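As a rough illustration of the configuration-plus-package-list idea above (paths are examples, and the commands assume an rpm-based system such as SUSE):

    # record which packages are installed
    rpm -qa --qf '%{NAME}\n' | sort > /backup/package-list.txt

    # archive the configuration files rather than the whole OS
    tar -czf /backup/etc-backup.tar.gz /etc

    # or, alternatively, a baseline image of the system disk with dd
    # (device name is illustrative; best run from a rescue system)
    dd if=/dev/sda of=/backup/baseline.img bs=1M

Restoring then becomes: install a clean OS, reinstall from the package list, and drop the saved configuration back into place.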