[opensuse] Backup Suggestions?
Hi Folks,

I've been using rdiff-backup for many years, both at home and in a client/server setup at work. But it's getting a bit long in the tooth, and I get occasional crashes seemingly caused by a problem with gzip.py (IOError: Negative seek in write mode). It seems that rdiff-backup hasn't been updated since 2009; maybe it's time for me to switch? But to what?

I'm doing backups to disk rather than to tape, and the disks are either local or remote. The candidate system should back up to a regular filesystem and be accessible just like the source. The backup should be an up-to-date copy of the source at the time the backup was run. It should do incremental backups and be able to restore to any given time the backup was run. For instance, if the process is run on a daily basis, you should be able to recover a file/directory that was removed 30 or more days ago. I've recovered files that had been removed two years ago with rdiff-backup. It's not necessary for individual users to recover their own stuff; it's enough for root to do it for them. I know about rsync, but it doesn't do the history thing, as far as I know.

Can anyone offer any suggestions?

Thanks,
Lew
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
On 12/11/2015 06:40 PM, Lew Wolfgang wrote:
I know about rsync, but it doesn't do the history thing, as far as I know.
What do you mean by that "history thing"? Yes, rsync can save "generations" of a file.
From the man page:

<quote>
-b, --backup
    With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options.
</quote>
I would suggest that the suffix be altered on each run so that it is different for each run. The date/time is probably the best. Something like

    date +"%Y%m%d%H%M%S"

or perhaps

    date +"%Y%m%d%H%M%S%s"

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
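Anton's suggestion above can be sketched roughly as follows. This is a minimal illustration, not anything from the thread: the paths are placeholders, and the helper name `backup_run` is invented.

```shell
#!/bin/sh
# Sketch: sync a tree with rsync while keeping a dated "generation" of
# every file that gets replaced or deleted, per the man-page excerpt above.
# All paths are hypothetical examples.
backup_run() {
    src=$1 dest=$2 gens=$3
    # Unique suffix per run, e.g. ".20151211184000"
    suffix=".$(date +%Y%m%d%H%M%S)"
    rsync -a --delete \
          --backup --backup-dir="$gens" --suffix="$suffix" \
          "$src" "$dest"
}

# Example invocation (hypothetical paths):
# backup_run /home/lew/ /backup/current/ /backup/generations
```

Because the suffix changes on every run, older generations accumulate under the backup dir instead of being overwritten, which approximates the "history" rdiff-backup gives for free.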
On 12/11/2015 06:28 PM, Anton Aylward wrote:
On 12/11/2015 06:40 PM, Lew Wolfgang wrote:
I know about rsync, but it doesn't do the history thing, as far as I know.

What do you mean by that "history thing"?
Yes, rsync can save "generations" of a file.
From the man page:

<quote>
-b, --backup
    With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options.
</quote>
I would suggest that the suffix be altered on each run so that it is different for each run. The date/time is probably the best.
Something like
date +"%Y%m%d%H%M%S"
or perhaps
date +"%Y%m%d%H%M%S%s"
I didn't explain myself very well. By "history" I mean that you can restore a file or a whole hierarchy by just specifying what date in the past you want to restore. rdiff-backup made this easy and automatic.

Regards,
Lew
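For readers unfamiliar with it, the "easy and automatic" history restore Lew describes is rdiff-backup's `-r`/`--restore-as-of` option. A sketch with hypothetical paths (the repository and destination are invented for illustration):

```shell
#!/bin/sh
# Point-in-time restore with rdiff-backup (paths hypothetical).
# -r accepts intervals like 30D, 2W, 2Y, or an ISO date string.
if command -v rdiff-backup >/dev/null 2>&1; then
    # Restore the directory as it existed 30 days ago:
    rdiff-backup -r 30D /backup/lew/Documents /tmp/Documents-30-days-ago
    # "-r 2Y" would reach back two years, as mentioned earlier in the thread.
else
    echo "demo only: rdiff-backup not installed" >&2
fi
```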
On Friday, 11 December 2015, 15:40:12, Lew Wolfgang wrote:
[...] Can anyone offer any suggestions?
I am using storeBackup. It uses hardlinks for deduplication, deletes backups after some time (or number of backups), and offers optional compression. Personally, I like that storeBackup slices big VM hard-disk files into smaller pieces and deduplicates them, effectively reducing the size needed for a subsequent backup.

Official packages are available for openSUSE at http://software.opensuse.org/

Regards,
Jan
--
Everything needs a little oil now and then.
On 12/12/2015 01:58 AM, Jan Ritzerfeld wrote:
On Friday, 11 December 2015, 15:40:12, Lew Wolfgang wrote:
[...] Can anyone offer any suggestions?

I am using storeBackup. It uses hardlinks for deduplication, deletes backups after some time (or number of backups), and offers optional compression. Personally, I like that storeBackup slices big VM hard-disk files into smaller pieces and deduplicates them, effectively reducing the size needed for a subsequent backup.
Official packages are available for openSUSE at http://software.opensuse.org/
Hi Jan,

storeBackup sounds really interesting! Do you know how well it works when backing up to a remote machine? NFS? sshfs? Rdiff-backup uses an ssh tunnel for remote backups.

I have two use cases:

1. Backup to other disk(s) on the same machine.
2. Backup of many machines to a common backup server.

I've been using cron'ed scripts on the common server to reach out to all the remote machines (using ssh) to run the rdiff-backup incremental dumps. Root access to the remote machines is granted via public PKI keys and so is as secure as the common dump server. Having the dumps coordinated from the main backup server is nice in that the whole process is controlled from one spot, and you don't have 20 machines all trying to dump at once, clogging everything up.

Regards,
Lew
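Lew's pull-style coordination (one server walking a list of clients over ssh, one at a time) might look roughly like the sketch below. The hostnames and paths are invented for illustration; his actual cron'ed scripts are not in the thread.

```shell
#!/bin/sh
# Sketch of a centrally coordinated backup run (hosts/paths hypothetical).
# Running the dumps sequentially from one cron job keeps the clients from
# all backing up at once and clogging the network.
HOSTS="web1 db1 files1"

for host in $HOSTS; do
    echo "backing up $host..."
    # rdiff-backup's host::path syntax runs the remote side over ssh;
    # root access is assumed to be set up with ssh public keys.
    rdiff-backup "root@$host::/home" "/srv/backups/$host/home" \
        || echo "backup of $host failed" >&2
done
```

Driving everything from the server also means there is a single log and a single schedule to maintain, at the cost of the server holding keys to every client.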
On Saturday, 12 December 2015, 09:49:15, Lew Wolfgang wrote:
Hi Jan,
storeBackup sounds really interesting! Do you know how well it works when backing up to a remote machine?
I use a LUKS-encrypted iSCSI volume on my NAS as the target. :)
NFS? sshfs? Rdiff-backup uses a ssh tunnel for remote backups.
NFS works and sshfs should work. In case of sshfs the FAQ says that you should perform the deduplication (by creating hardlinks) directly on the target server because of performance: http://www.nongnu.org/storebackup/en/node85.html#myfaqSSH I am not quite sure whether sshfs supports special files like sockets, pipes, and so on. However, storeBackup is able to archive these types of files using cpio or tar. Furthermore, the slicing of big VM files and the deduplication directly on the target server might not be compatible. IIRC you have to pick one.
I have two use cases:
1. Backup to other disk(s) on the same machine.
The easy one... :)
2. Backup of many machines to a common backup server.
I've been using cron'ed scripts on the common server to reach out to all the remote machines (using ssh) to run the rdiff-backup incremental dumps. Root access to the remote machines is granted via public pki keys and so is as secure as the common dump server. Having the dumps coordinated from the main backup server is nice in that the whole process is controlled from one spot and you don't have 20 machines all trying to dump at once, clogging everything up.
Cool! If the clients should mount the target server only on demand, there is the script storeBackupMount that mounts, starts the backup, and unmounts. Besides, this scenario might be better handled by Bacula, Amanda, or another "enterprise level" backup system specially designed for backing up many machines over the network.

Regards,
Jan
--
If you knew what you were doing you'd probably be bored.
On 11/12/15 23:40, Lew Wolfgang wrote:
Hi Folks,
I've been using rdiff-backup for many years, both at home and in a client/server setup at work. But it's getting a bit long in the tooth, and I get occasional crashes seemingly caused by a problem with gzip.py (IOError: Negative seek in write mode). It seems that rdiff-backup hasn't been updated since 2009; maybe it's time for me to switch? But to what?
I'm doing backups to disk rather than to tape, and the disks are either local or remote. The candidate system should back up to a regular filesystem and be accessible just like the source. The backup should be an up-to-date copy of the source at the time the backup was run. It should do incremental backups and be able to restore to any given time the backup was run. For instance, if the process is run on a daily basis, you should be able to recover a file/directory that was removed 30 or more days ago. I've recovered files that had been removed two years ago with rdiff-backup. It's not necessary for individual users to recover their own stuff; it's enough for root to do it for them. I know about rsync, but it doesn't do the history thing, as far as I know.
Can anyone offer any suggestions?
I back up to a pair of disks in an external enclosure formatted as a btrfs raid1 array. I have written a python script that uses rsync to do the daily backup, followed by a btrfs snapshot of that backup. A variable in the script decides when to delete old, out-of-date snapshots (I use 90 days here). Rsync gives me the granular control I want, by using .rsync-filter files scattered through the folder hierarchy to fine-tune what to include/exclude in the backup. I back up stuff like Documents, Pictures, and Music to separate destination folders. The btrfs snapshots on the destination are named with a date and time string, which makes them easy to find.

You are welcome to a copy, if you think it would help.

Bob
--
Bob Williams
System: Linux 4.1.13-5-default
Distro: openSUSE 42.1 (x86_64) with KDE Development Platform: 4.14.10
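Bob's rsync-then-snapshot scheme can be sketched as below. His actual script is in Python and is not shown in the thread, so everything here (paths, the 90-day retention) is an illustrative reconstruction.

```shell
#!/bin/sh
# Sketch of an rsync + btrfs-snapshot backup: sync into a btrfs subvolume,
# then take a read-only, dated snapshot of it. Paths are hypothetical and
# DEST must already be a btrfs subvolume.
DEST=/mnt/backup/current          # live mirror of the source
SNAPS=/mnt/backup/snapshots       # dated, read-only snapshots
KEEP_DAYS=90                      # retention window

if [ -d "$DEST" ] && command -v btrfs >/dev/null 2>&1; then
    # -F honors per-directory .rsync-filter files for include/exclude rules.
    rsync -aF --delete /home/ "$DEST"/
    # Name the snapshot with a date/time string so restores are easy to find.
    btrfs subvolume snapshot -r "$DEST" "$SNAPS/$(date +%Y%m%d-%H%M%S)"
    # Drop snapshots older than the retention window.
    find "$SNAPS" -mindepth 1 -maxdepth 1 -mtime +"$KEEP_DAYS" \
        -exec btrfs subvolume delete {} \;
else
    echo "demo only: btrfs destination not present" >&2
fi
```

Restoring a file from N days ago is then just a copy out of the corresponding snapshot directory, since each snapshot is a browsable filesystem tree.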
On Fri, Dec 11, 2015 at 6:40 PM, Lew Wolfgang wrote:
Hi Folks,
I've been using rdiff-backup for many years, both at home and in a client/server setup at work. But it's getting a bit long in the tooth, and I get occasional crashes seemingly caused by a problem with gzip.py (IOError: Negative seek in write mode). It seems that rdiff-backup hasn't been updated since 2009; maybe it's time for me to switch? But to what?
I'm doing backups to disk rather than to tape, and the disks are either local or remote. The candidate system should back up to a regular filesystem and be accessible just like the source. The backup should be an up-to-date copy of the source at the time the backup was run. It should do incremental backups and be able to restore to any given time the backup was run. For instance, if the process is run on a daily basis, you should be able to recover a file/directory that was removed 30 or more days ago. I've recovered files that had been removed two years ago with rdiff-backup. It's not necessary for individual users to recover their own stuff; it's enough for root to do it for them. I know about rsync, but it doesn't do the history thing, as far as I know.
Can anyone offer any suggestions?
Thanks, Lew
I too was a big fan of rdiff-backup. I don't know of an equivalent.

I am now using a cloud service: SpiderOak.

    first 5GB offsite: free
    1 TB offsite: $12/month

They dedup and compress before determining your storage usage. Just grab the Fedora rpm and install it.

Greg
On 12/12/2015 02:04 AM, Greg Freemyer wrote:
I too was a big fan of rdiff-backup. I don't know of an equivalent.
I am now using a cloud service: spideroak
first 5GB offsite: free 1 TB offsite: $12/month
they dedup and compress before determining your storage usage.
Just grab the Fedora rpm and install it.
Greg
Belt AND suspenders AND duct-tape guy here.

Plus one for SpiderOak. I back up our main server, our main workstations, and laptops, each of which is going to have entire development trees for several different projects. Additionally, various accounting files, /etc directories, and assorted important stuff are backed up via SpiderOak. I love the ability to walk back through VERSIONS of any file and recover changes done months ago. I pay for something like 105GB of space, but in spite of using them for years I still can't get beyond the 5-gig free space allotment, mostly due to the compression and de-duplication they provide. See: https://spideroak.com/faq/what-is-deduplication

(Of course, nothing I mentioned above about SpiderOak is all that unique; lots of services have these features. The KILLER feature is Zero Knowledge: they couldn't decrypt and hand over my files even if a gun were held to their head. See https://spideroak.com/about/law-enforcement)

To this I add scheduled automatic backups to an in-house NAS drive using a paid package I purchased years ago, BRU from tolisgroup.com. This takes complete backups of machines and stacks them on the NAS (which itself has mirrored drives).

In addition, I use Unison to sync various workstations' development trees with my main server, as well as my traveling laptops, and between our two office locations. Unison is cross-platform, so it works on Windows as well as openSUSE (it's in the openSUSE repositories). Some Unison runs are scheduled, others are on-demand.

--
After all is said and done, more is said than done.
On 12/12/2015 02:04 AM, Greg Freemyer wrote:
On Fri, Dec 11, 2015 at 6:40 PM, Lew Wolfgang
wrote:

Hi Folks,
I've been using rdiff-backup for many years, both at home and in a client/server setup at work. But it's getting a bit long in the tooth, and I get occasional crashes seemingly caused by a problem with gzip.py (IOError: Negative seek in write mode). It seems that rdiff-backup hasn't been updated since 2009; maybe it's time for me to switch? But to what?
I'm doing backups to disk rather than to tape, and the disks are either local or remote. The candidate system should back up to a regular filesystem and be accessible just like the source. The backup should be an up-to-date copy of the source at the time the backup was run. It should do incremental backups and be able to restore to any given time the backup was run. For instance, if the process is run on a daily basis, you should be able to recover a file/directory that was removed 30 or more days ago. I've recovered files that had been removed two years ago with rdiff-backup. It's not necessary for individual users to recover their own stuff; it's enough for root to do it for them. I know about rsync, but it doesn't do the history thing, as far as I know.
Can anyone offer any suggestions?
Thanks, Lew

I too was a big fan of rdiff-backup. I don't know of an equivalent.
I am now using a cloud service: spideroak
SpiderOak sounds interesting, but I've got more than 3 TB of stuff on my home system and hundreds of TB at my customer's site. Further, my customer won't allow cloud storage. That being said, I'm going to check out SpiderOak for a subset of the really important stuff, like those old photos of the Fetching Mrs Wolfgang.

Also, after a short exchange on the rdiff-backup mailing list, Joe Steele and Andrea Cozzolino figured out that the main openSUSE repositories don't have the most up-to-date version of rdiff-backup. Two patches have been applied to the version in the "Archiving" repository.

The patch: https://build.opensuse.org/request/show/259636
The repo: http://software.opensuse.org/download.html?project=Archiving&package=rdiff-backup

It seems to have fixed my problem, which was something about negative seeks in sparse files.

Regards,
Lew
I use "rsnapshot", which is a little like "time machine" for Mac. Same technology as has been mentioned here (hardlinks/rsync/etc.), but it's easier to configure via a conf file.

-e

On 12/11/15 18:40, Lew Wolfgang wrote:
Hi Folks,
I've been using rdiff-backup for many years, both at home and in a client/server setup at work. But it's getting a bit long in the tooth, and I get occasional crashes seemingly caused by a problem with gzip.py (IOError: Negative seek in write mode). It seems that rdiff-backup hasn't been updated since 2009; maybe it's time for me to switch? But to what?
I'm doing backups to disk rather than to tape, and the disks are either local or remote. The candidate system should back up to a regular filesystem and be accessible just like the source. The backup should be an up-to-date copy of the source at the time the backup was run. It should do incremental backups and be able to restore to any given time the backup was run. For instance, if the process is run on a daily basis, you should be able to recover a file/directory that was removed 30 or more days ago. I've recovered files that had been removed two years ago with rdiff-backup. It's not necessary for individual users to recover their own stuff; it's enough for root to do it for them. I know about rsync, but it doesn't do the history thing, as far as I know.
Can anyone offer any suggestions?
Thanks, Lew
I use rsnapshot here too, both hourly (6x a day) and daily, with a 4-week retention.

My rsnapshot.conf:

    config_version          1.2
    snapshot_root           /backup/
    no_create_root          1
    cmd_cp                  /bin/cp
    cmd_rm                  /bin/rm
    cmd_rsync               /usr/bin/rsync
    cmd_ssh                 /usr/bin/ssh
    cmd_logger              /usr/bin/logger
    cmd_du                  /usr/bin/du
    cmd_rsnapshot_diff      /usr/bin/rsnapshot-diff
    interval                hourly  6
    interval                daily   7
    interval                weekly  4
    verbose                 2
    loglevel                3
    logfile                 /var/log/rsnapshot
    lockfile                /var/run/rsnapshot.pid
    rsync_short_args        -rlptgDEvHhXAo
    rsync_long_args         --delete --numeric-ids --relative --delete-excluded
    include_file            /usr/scripts/backup/backup.include
    exclude_file            /usr/scripts/backup/backup.exclude
    link_dest               1
    use_lazy_deletes        1
    backup                  /       countryside/

backup.exclude:

    #exclude
    - /proc/*
    - /tmp/*
    - /backup/*
    - /dev/*
    - /sys/*
    - /mnt/*
    - lost+found/
    - /.journal
    - /.fsck
    - /var/lib/named/proc/*
    - /var/lib/ntp/proc/*
    - /var/lib/php5/sessions/*
    - /var/run/*
    - /var/tmp/*
    - /var/spool/*
    - /var/cache/*
    - /var/lock/*
    - /run/*
    - /opt/minecraft/*
    - ibdata*
    - ib_logfile*
    - /data/exclude/*

backup.include:

    #include
    + /dev/console
    + /dev/initctl
    + /dev/null
    + /dev/zero
    + /

cron entries:

    #server hard drive
    0 */4 * * * /usr/scripts/backup/rsnapshot_hourly.sh | mail -s 'Hourly backup log' myemail@domain.com
    30 3 * * * /usr/scripts/backup/rsnapshot_daily.sh | mail -s 'Daily backup log' myemail@domain.com

rsnapshot_hourly.sh:

    #!/bin/bash
    # Make sure that the script isn't already running; die if it is.
    SCRIPTNAME=`basename $0`
    PIDFILE=/var/run/${SCRIPTNAME}.pid
    if [ -f ${PIDFILE} ]; then
        # Verify whether the process is actually still running under this pid.
        OLDPID=`cat ${PIDFILE}`
        RESULT=`ps -ef | grep ${OLDPID} | grep ${SCRIPTNAME}`
        if [ -n "${RESULT}" ]; then
            echo "Backup already running! Exiting."
            exit 255
        fi
    fi
    # Grab the pid of this process and update the pid file with it.
    PID=`ps -ef | grep ${SCRIPTNAME} | head -n1 | awk ' {print $2;} '`
    echo ${PID} > ${PIDFILE}
    mount | grep backup > /dev/null
    if [ ! "$?" -eq "0" ]; then
        echo Mounting Backup volume
        mount /backup
    fi
    mount | grep backup > /dev/null
    if [ ! "$?" -eq "0" ]; then
        echo Backup volume not mounted, fail.
        exit 2
    fi
    echo Starting backup...
    /usr/bin/rsnapshot hourly
    echo Backup finished.
    echo
    echo
    df -hx tmpfs
    echo Unmounting backup volume.
    umount /backup
    mount | grep backup > /dev/null
    if [ ! "$?" -eq "0" ]; then
        echo Backup volume unmounted.
    fi
    if [ -f ${PIDFILE} ]; then
        rm ${PIDFILE}
    fi

rsnapshot_daily.sh:

    #!/bin/bash
    # Make sure that the script isn't already running; die if it is.
    SCRIPTNAME=`basename $0`
    PIDFILE=/var/run/${SCRIPTNAME}.pid
    if [ -f ${PIDFILE} ]; then
        # Verify whether the process is actually still running under this pid.
        OLDPID=`cat ${PIDFILE}`
        RESULT=`ps -ef | grep ${OLDPID} | grep ${SCRIPTNAME}`
        if [ -n "${RESULT}" ]; then
            echo "Backup already running! Exiting."
            exit 255
        fi
    fi
    # Grab the pid of this process and update the pid file with it.
    PID=`ps -ef | grep ${SCRIPTNAME} | head -n1 | awk ' {print $2;} '`
    echo ${PID} > ${PIDFILE}
    mount | grep backup > /dev/null
    if [ ! "$?" -eq "0" ]; then
        echo Mounting Backup volume
        mount /backup/
    fi
    mount | grep backup > /dev/null
    if [ ! "$?" -eq "0" ]; then
        echo Backup volume not mounted, fail.
        exit 2
    fi
    echo Starting backup...
    /usr/bin/rsnapshot daily
    echo Backup finished.
    echo
    echo
    df -hx tmpfs
    echo Unmounting backup volume.
    umount /backup
    mount | grep backup > /dev/null
    if [ ! "$?" -eq "0" ]; then
        echo Backup volume unmounted.
    fi
    if [ -f ${PIDFILE} ]; then
        rm ${PIDFILE}
    fi
On Friday, 11 December 2015, 15:40:12, Lew Wolfgang wrote:
I'm doing backups to disk rather than to tape, and the disks are either local or remote. The candidate system should back up to a regular filesystem and be accessible just like the source. The backup should be an up-to-date copy of the source at the time the backup was run. It should do incremental backups and be able to restore to any given time the backup was run. For instance, if the process is run on a daily basis, you should be able to recover a file/directory that was removed 30 or more days ago. I've recovered files that had been removed two years ago with rdiff-backup. It's not necessary for individual users to recover their own stuff; it's enough for root to do it for them. I know about rsync, but it doesn't do the history thing, as far as I know.
Can anyone offer any suggestions?
I can recommend duplicity. It is basically a Python script with impressively rich options, using rsync and GPG to produce encrypted archives (encryption of archives is optional), and it keeps incremental backups according to a user-defined pattern. I use it to back up to various network devices. See http://duplicity.nongnu.org/ (it is available in the openSUSE repositories).

V.
--
Vojtěch Zeisek
Komunita openSUSE GNU/Linuxu
Community of the openSUSE GNU/Linux
http://www.opensuse.org/
http://trapa.cz/
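A hypothetical duplicity session along the lines Vojtěch describes; the target URL, paths, passphrase, and retention values are all placeholders, not his configuration:

```shell
#!/bin/sh
# Sketch: encrypted incremental backups to a network target with duplicity.
# URL and paths are hypothetical; duplicity reads the GPG passphrase from
# the PASSPHRASE environment variable.
export PASSPHRASE='example-only'
TARGET=sftp://backup@nas.example//srv/duplicity/home

if command -v duplicity >/dev/null 2>&1; then
    # Take a full backup once a month, incrementals in between:
    duplicity --full-if-older-than 1M /home "$TARGET"
    # Restore the tree as it existed 30 days ago:
    duplicity restore --time 30D "$TARGET" /tmp/home-30-days-ago
    # Prune backup chains older than six months:
    duplicity remove-older-than 6M --force "$TARGET"
else
    echo "demo only: duplicity not installed" >&2
fi
```

The `--time` interval syntax (30D, 6M, etc.) is what gives the same "restore to any given date" behavior Lew was getting from rdiff-backup.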
participants (9)
- Anton Aylward
- Bob Williams
- Christopher Myers
- Emilio Recio
- Greg Freemyer
- Jan Ritzerfeld
- John Andersen
- Lew Wolfgang
- Vojtěch Zeisek