I'm looking for a utility to be used for a disk-to-disk backup that recurses through a directory and does the following for each file and subdirectory:
1. If the backup file exists but the original doesn't, the backup is deleted.
2. If the backup file exists and is as new as the original, no copying is done.
3. If the backup file does not exist, the original is copied to the backup.
I could certainly write a shell script using "find" for doing this, but I wonder if there's some handy-dandy utility around that already does it for me.
Paul
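[For reference, a minimal sketch of the find-based script Paul mentions. The function name and paths are illustrative, not from the thread, and it is naive about unusual filenames (spaces at line edges, newlines):]

```shell
#!/bin/sh
# Editor's sketch of the three rules above using find/cp/rm.
# sync_dirs SRC DST -- names and structure are assumptions, not Paul's code.
sync_dirs() {
    SRC=$1; DST=$2

    # Rule 1: remove backups whose original no longer exists.
    # (-depth lists children before their directories.)
    find "$DST" -depth -mindepth 1 | while read -r f; do
        rel=${f#"$DST/"}
        [ -e "$SRC/$rel" ] || rm -rf "$f"
    done

    # Rules 2 and 3: copy files that are missing from the backup,
    # or newer than it; skip files whose backup is as new (-nt is false).
    find "$SRC" -mindepth 1 | while read -r f; do
        rel=${f#"$SRC/"}
        if [ -d "$f" ]; then
            mkdir -p "$DST/$rel"
        elif [ ! -e "$DST/$rel" ] || [ "$f" -nt "$DST/$rel" ]; then
            cp -p "$f" "$DST/$rel"
        fi
    done
}
```

As the replies below point out, rsync already does all of this.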
On Wednesday 15 December 2004 05:43 pm, Paul W. Abrahams wrote:
I'm looking for a utility to be used for a disk-to-disk backup that recurses through a directory and does the following for each file and subdirectory:
1. If the backup file exists but the original doesn't, the backup is deleted.
2. If the backup file exists and is as new as the original, no copying is done.
3. If the backup file does not exist, the original is copied to the backup.
I could certainly write a shell script using "find" for doing this, but I wonder if there's some handy-dandy utility around that already does it for me.
Paul
man rsync
On Wed, 15 Dec 2004, Paul W. Abrahams wrote:
man rsync.
You can use it over ssh, too.
rsync -e ssh
rsync -e 'ssh -l someuser' ....
--
Carpe diem - Seize the day.
Carp in denim - There's a fish in my pants!
Jon Nelson
On Wednesday 15 December 2004 06:11 pm, Jon Nelson wrote:
man rsync.
You can use it over ssh, too.
rsync -e ssh
rsync -e 'ssh -l someuser' ....
More specifically, I use:
rsync -auvzr -e ssh /path/to/local/dir/ <name of remote machine>:/path/to/backup/dir/
and back up directories nightly. Note the trailing slashes on the above paths. They are important.
On Wed, 2004-12-15 at 18:20, Bruce Marshall wrote:
More specifically, I use:
rsync -auvzr -e ssh /path/to/local/dir/ <name of remote machine>:/path/to/backup/dir/
and back up directories nightly.
Note the trailing slashes on the above paths. They are important.
Before I retired I was using rsync to back up many gigs of data across a WAN connection, and I also used the compression switch. Very good program. Also, the trailing "/" is only needed on the source, not the destination.
-- Ken Schneider UNIX since 1989 SuSE since 1998 * Only reply to the list please*
On Wednesday 15 December 2004 6:26 pm, Ken Schneider wrote:
Before I retired I was using rsync to backup many gig of data across a wan connection and also used the compression switch. Very good program. Also the trailing "/" is only needed on the source not the destination.
Thanks to all for the pointer to rsync. But ---
I tried using it to do what seems like a simple case: backing up from one mounted directory to another (actually, the directories are on different hard drives). But I got the following:
pwa@suillus:~> rsync -a --delete-after /home/pwa /home-backup/pwa
rsync: writefd_unbuffered failed to write 4 bytes: phase "unknown": Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(515)
rsync: writefd_unbuffered failed to write 69 bytes: phase "unknown": Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(515)
pwa@suillus:~>
What now, good people?
Paul
On Wed, 15 Dec 2004 18:50:44 -0500, Paul W. Abrahams
On Wednesday 15 December 2004 6:26 pm, Ken Schneider wrote:
Before I retired I was using rsync to backup many gig of data across a wan connection and also used the compression switch. Very good program. Also the trailing "/" is only needed on the source not the destination.
Thanks to all for the pointer to rsync. But ---
I tried using it to do what seems like a simple case: backing up from one mounted directory to another (actually, the directories are on different hard drives). But I got the following:
pwa@suillus:~> rsync -a --delete-after /home/pwa /home-backup/pwa
rsync: writefd_unbuffered failed to write 4 bytes: phase "unknown": Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(515)
rsync: writefd_unbuffered failed to write 69 bytes: phase "unknown": Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(515)
pwa@suillus:~>
What now, good people?
Paul
I don't use rsync, but the other guys said that the trailing slash at the end of the orig. directory is mandatory, and your command does not have one.
Cheers
Sunny
-- Get Firefox http://www.spreadfirefox.com/?q=affiliates&id=10745&t=85
On Wed, 2004-12-15 at 19:07, Sunny wrote:
I don't use rsync, but the other guys said that the trailing slash at the end of the orig. directory is mandatory. And your command does not have one.
The trailing slash on the source tells rsync to copy the contents of the directory rather than the directory itself (recursion comes from -a or -r). Also, make sure the /home-backup destination exists first.
-- Ken Schneider UNIX since 1989 SuSE since 1998 * Only reply to the list please*
On Wednesday 15 December 2004 15:26, Ken Schneider wrote:
Before I retired I was using rsync to backup many gig of data across a wan connection and also used the compression switch. Very good program. Also the trailing "/" is only needed on the source not the destination.
Which is better for backup, dump or rsync? Jerome
Susemail wrote:
Which is better for backup, dump or rsync? Jerome
From the man page -- maybe it does ext3 as well:
dump - ext2 filesystem backup
If you use dump on reiserfs, you'll see some strange happenings, guaranteed. rsync and unison are filesystem-agnostic and are easy to use.
Regards
Sid.
-- Sid Boyce .... Hamradio G3VBV and keen Flyer =====LINUX ONLY USED HERE=====
Susemail wrote:
On Sunday 19 December 2004 02:53, Sid Boyce wrote:
From the man page, may be it does ext3 as well:- dump - ext2 filesystem backup If you use dump on reiserfs, you'll see some strange happenings, guaranteed, rsync and unison are filesystem agnostic and are easy to use. Regards Sid. -- Sid Boyce .... Hamradio G3VBV and keen Flyer =====LINUX ONLY USED HERE=====
I was curious because an article in Sys Admin suggested dump is a great backup program; it never mentioned the filesystem limitations.
The first time I tried it was after reading an article. I couldn't
understand at first what was happening, then I noticed a message about
ext2 which prompted me to look at the manpage and worry that I may have
corrupted my filesystem, but it was OK, so I looked around and found
unison. You'll notice that dump is not installed on your system if you
have reiserfs filesystems only. I have a script (RSYNC) that I use on
all my 5 boxes for backup/restore and when I'm upgrading disks, after
fdisk, mkswap, mkreiserfs on the new HD, I rsync the current disk across
to the new one, then swap them over.
# o /usr/local/mybin/RSYNC
####rsync -avz --rsh=ssh <local directory>
On Sat, 18 Dec 2004 18:52:07 -0800, you wrote:
Which is better for backup, dump or rsync?
Rsync. Dump has been stagnant and unsupported for eons, and it was cranky to use even when it was still in development.
Mike-
-- If you can keep your head while those around you are losing theirs... You may have a great career as a network administrator ahead!
-- Please note - Due to the intense volume of spam, we have installed site-wide spam filters at catherders.com. If email from you bounces, try non-HTML, non-encoded, non-attachments,
"Backup utility: does it exist?"
Um.... dude... this is the open-source world.... it's not "does it exist", it's "where do I download it?" Generally... it exists.... or something very close to it.
What doesn't exist in Linux:
1. Halo
2. The "Visual" part of Visual Studio .NET (almost out, though!)
Basically everything else, or its equivalent, IS THERE. FOR FREE. HIGH/HIGHER QUALITY. That's why I switched to Linux.
Cheerio,
ES
-- Registered Linux user #: 366,862 Registered Linux computer #: 261,856 RedHat Linux Fedora 2
Give me three reasons to use Microsoft software. Chances are the only feasible ones are:
1. I'm ignorant and don't realize that the open-source community has proven that open source WORKS, or I think open-source software is vastly inferior....... I obviously have never used an open-source program developed by a large community.
2. I like Halo....... okay, good point.
3. I like the Visual part of Visual Studio..... Yes, it can be handy at times. But an experienced .NET developer would have no problem switching to Linux in a jiffy, losing little or no productivity. (www.mono-project.com)
Paul W. Abrahams wrote:
I know everybody's been recommending rsync, but I might suggest that you also look at unison (http://www.cis.upenn.edu/~bcpierce/unison/index.html - it's also included with SuSE). I've found it very easy to use. Steve
Steve wrote:
I know everybody's been recommending rsync, but I might suggest that you also look at unison (http://www.cis.upenn.edu/~bcpierce/unison/index.html - it's also included with SuSE). I've found it very easy to use.
Steve
Going back years -- I used unison, then it developed a problem around a glibc change. I started using rsync when it first came out and have used it ever since, though unison seems to be back on track. I'd have no qualms using either; they're both excellent. Some suggestions were to use tools that depend on dump -- a bad choice, as dump has been stagnant for years; it only does ext2 and maybe ext3. As the guy says, it's a matter of which tools and where to download them from. Some pundits write that the weakness of Linux is that there are too many applications that do the same thing, which leads to confusion, but I say only the confused get further confused: you can successfully give verbal directions to a blind person; you stand no chance with a confused person.
Regards
Sid.
-- Sid Boyce .... Hamradio G3VBV and keen Flyer =====LINUX ONLY USED HERE=====
Steve, On Wednesday 15 December 2004 22:22, Steve wrote:
...
I know everybody's been recommending rsync, but I might suggest that you also look at unison
So I did. One of the first things I noticed is that the most recent date in the BUGS.txt file ("/usr/share/doc/packages/unison/BUGS.txt") is 2002.
Here, the "(non-) development" page (http://www.cis.upenn.edu/~bcpierce/unison/status.html) confirms the fact that this software is no longer being developed. That's too bad. The software may be fine, as long as the known (or unknown) bugs don't affect you, but users should probably be aware of the fact. On the other hand, it appears the code is not being completely ignored, bug reports and patches are being accepted. Unfortunately, there are no dates on the list of bugs reported and patches applied to the current beta release, so it's impossible to gauge the level of activity. One clue is the modification time on the "Change log for beta-release version" page (<Change log for beta-release version>), which is 11/01/2004 08:25:45 AM.
Randall Schulz
mondorescue.org is also excellent! B-) On Monday 20 December 2004 09:15 am, Randall R Schulz wrote:
On Mon, 20 Dec 2004 09:35:34 -0700, you wrote:
mondorescue.org is also excellent!
I have to disagree. It executes very slowly, and the arbitrary limit of 50 backup media shows a poorly thought-out design, IMHO. If you're trying to do an offline backup (which isn't what the original poster was asking about), look into Dar.
Mike-
I don't see a problem with the 50 limit (if there is one):
if it takes over 50 floppies, use CD;
if it takes over 50 CDs, use DVD;
if it takes over 50 DVDs, use tape;
if it takes over 50 tapes, well......
Didn't see the OP. (one of the problems with trimming) B=)
On Monday 20 December 2004 09:46 am, Michael W Cocke wrote:
On Mon, 20 Dec 2004 09:35:34 -0700, you wrote:
mondorescue.org is also excellent!
I have to disagree. It executes very slowly and the arbitrary limit of 50 backup media shows a poorly thought out design, IMHO. If you're trying to do an offline backup (which isn't what the original poster was asking about) look into Dar.
Mike-
On Mon, 20 Dec 2004 09:57:09 -0700, you wrote:
I don't see a problem with the 50 limit (if there is one)
if it takes over 50 floppies, use CD if it takes over 50 CD's, use DVD if it takes over 50 DVD's, use tape if it takes over 50 tapes, well......
Please stop top-posting.
As it happens, my DAT drive used 4 GB tapes, so I used DVDs (4.7 GB). It took a week (Dar takes 3 days using DARomizer) to fill 50 of them, after which - bang. Wasted a week, still had no backup.
I repeat - any backup system which has a limit on the number of media you can use is a poorly thought-out design.
Mike-
OP: take it for whatever it's worth. Mike: ignore this! B-) On Monday 20 December 2004 10:21 am, Michael W Cocke wrote:
On Mon, 20 Dec 2004 09:57:09 -0700, you wrote:
I don't see a problem with the 50 limit (if there is one)
if it takes over 50 floppies, use CD if it takes over 50 CD's, use DVD if it takes over 50 DVD's, use tape if it takes over 50 tapes, well......
Please stop top posting.
As it happens, my DAT drive used 4 Gb tapes, so I used DVDs (4.7 Gb). Took a week (Dar takes 3 days using DARomizer) to fill 50 of them, after which - bang. Wasted a week, still had no backup.
I repeat - any backup system which has a limit on the number of media you can use is a poorly thought out design.
Mike-
On Monday 20 December 2004 11:46 am, Michael W Cocke wrote:
I have to disagree. It executes very slowly and the arbitrary limit of 50 backup media shows a poorly thought out design, IMHO. If you're trying to do an offline backup (which isn't what the original poster was asking about) look into Dar.
Mike-
I would agree with this, and also with using DARomizer. I do my backups to DVD+RW and it works quite painlessly.
The point of mondo rescue is disaster recovery.
If you use rsync or other backups to copy data, and the entire server goes down, then you have to re-build the server to the point where you can restore from the rsync drive or whatever other backup device (tape, etc.) you do your incrementals with.
I use mondo for production. I distribute machines, so when I have the same hardware and software (server) setup I want to duplicate, I build the server the way I want it and do a mondo rescue to make it easy to duplicate from then on. It would work the same way for disaster recovery. If you have a server that works with a large amount of data, you would want to use mondo to back up the server (configuration) and whatever incremental backup to do the data.
Mondo is especially useful for dual-boot machines. It will restore the Windows, Linux, boot-loader, and all from bootable media (CD or DVD). If I'm not mistaken, it would also create a boot CD for restoring from a tape drive. B-)
On Monday 20 December 2004 11:01 am, Bruce Marshall wrote:
On Monday 20 December 2004 11:46 am, Michael W Cocke wrote:
I have to disagree. It executes very slowly and the arbitrary limit of 50 backup media shows a poorly thought out design, IMHO. If you're trying to do an offline backup (which isn't what the original poster was asking about) look into Dar.
Mike-
I would agree with this and also to use daromizer. I do my backups to DVD+rw and it works quite painlessly.
On Mon, 20 Dec 2004 13:00:50 -0700, you wrote:
The point of mondo rescue is disaster recovery.
If you use rsync or other backups to copy data, and the entire server goes down, then you have to re-build the server to the point where you can restore from the rsync drive or whatever other backup device (tape, etc.) you do your incrementals with.
I use mondo for production. I distribute machines, so when I have the same hardware and software (server) setup I want to duplicate, I build the server the way I want it and do a mondo rescue to make it easy to duplicate from then on. Would work the same way for disaster recovery.
[reposted, because I forgot to change the reply-to]
[snip]
And what do you do when you cannot replace a system with the exact same configuration? In my experience, you need to reinstall Linux and then restore the data... Bare-metal restore is a nice theory, but it has a bad record of colliding with reality. I have yet to see one work - including Mondo, which I did try to use until it failed to function.
Mike-
Yes, the other Mike (the one that makes mondo work with SuSE) was mentioning that you have tried before. I'm posting his response to the list. Bare-metal restore is good for data loss or corruption; it may not work if the hardware itself dies. I would think it close enough in most cases to at least get you back up and running to a point where you could accommodate the new hardware and create another backup. B-) On Monday 20 December 2004 01:34 pm, Michael W Cocke wrote:
Here is Mike's response:
And he is sure that the limit of 50 is hard-coded into Mondo? That's the question I'm asking. Then, assuming that he knows it's hard-coded into Mondo and knows where the code is, why not change it and compile it? Perhaps - and I'm only guessing - he ran out of memory, or something else caused the drop at 50.
If he is actually doing a backup of a system that requires more than 50 DVDs, which by my simple calculations would be over 250 GB compressed, he should really be looking at a commercial backup system - something along the lines of a DLT-type drive with an auto-tape loader. Heck, we just got an email server with 4 146-gig HDs in it. No, not for Linux, unfortunately. For backup? Another computer with 4 250-gig SATA drives in it.
FWIW, this isn't the first time he has brought this up on the SUSE list. And if it's so bad, how come I regularly download all the versions of SUSE I support? I get questions frequently and do my best to answer them. Had he asked me, I might have been able to find the actual problem. Instead he just bitches on the suse list.
Mike
B=) On Monday 20 December 2004 01:34 pm, Michael W Cocke wrote:
[forgot to change the blasted reply-to again! Sorry] On Mon, 20 Dec 2004 13:46:05 -0700, you wrote:
Here is Mike's response
And he is sure that the limit of 50 is hard coded into Mondo? That's the question I'm asking. Then assuming that he knows it's hard coded into Mondo, and knows where the code is, why not change it, and compile it. Perhaps and I'm only guessing, that he ran out of memory, or something other than that that causes the drop at 50.
I'd have to hunt it up, but it was in the Mondo docs somewhere. There's something - and the docs didn't specify exactly what, just mentioned 'something' that would need to be changed and recompiled to exceed 50 media.
If he is actually doing a backup of a system that requires more than 50 DVD's which by my simple calculations would be over 250G compressed, he should really be looking for a commercial backup system. Something along the of a DLT type drive with a auto-tape loader. Heck, we just got an email server with 4 146gig HD's in it. No, not for linux unfortunately. For backup? another computer with 4 250gig SATA drives in it.
<grin> How about a pair of 2.2-terabyte servers? This is for my home system, by the way... (yes, really!). Having priced DLT drives, I can tell you it's just not gonna happen. I do a full backup of one server to 200 DVD-RWs every other month. It takes around ten days. I only have to back up one server to DVDs, because the two servers mirror each other - using rsync over a gigabit LAN - every 12 hours.
FWIW, this isn't the first time he has brought this up on the SUSE list. And if it's so bad, how come I get regular download of All the versions I support of SUSE. I get regular questions frequently and do my best to answer them. Had he asked me, I might have been able to find the actual problem. Instead he just bitches on the suse list.
I just wanted the OP to have all the facts... (sound familiar?). FWIW, I did communicate back when I was trying to use Mondo & Mindi. You were just as polite and helpful then as you have been in this thread. If you don't want to hear about problems in your design, don't write badly designed code. Mike-
On Monday 20 December 2004 16:05, Michael W Cocke wrote:
[forgot to change the blasted reply-to again! Sorry]
On Mon, 20 Dec 2004 13:46:05 -0700, you wrote:
Here is Mike's response
<grin> How about a pair of 2.2 Terabyte servers? This is for my home system, by the way... (yes, really!).
None of my business, I know. Just curious: what in the world do you do with a home system like that??! Jerome
On Tuesday 21 December 2004 01:05, Michael W Cocke wrote:
I just wanted the OP to have all the facts... (sound familiar?). FWIW, I did communicate back when I was trying to use Mondo & Mindi. You were just as polite and helpful then as you have been in this thread. If you don't want to hear about problems in your design, don't write badly designed code.
you seem to think that I wrote the code. Let's get the first thing straight: I didn't write the code. Just that simple. I help folks when I can. Perhaps you spoke to Hugo, the author, about your 50-media limit; I don't know. But to just up and say the code is badly designed is flat wrong. It doesn't fit YOUR particular setup. Happens sometimes. A pair of 2.2TB servers, and you want to back up to DVDs? OK. That's your choice. Can you write a better program? If so, point me to it, and I'll test it too. If you can't, simply point out to folks that it won't do over 50 DVDs and let it drop. No need to batter the program. Mike -- Powered by SuSE 9.2 Kernel 2.6.8 KDE 3.3.0 Kmail 1.7.1 For Mondo/Mindi backup support go to http://www.mikenjane.net/~mike 4:24pm up 2 days 5:52, 4 users, load average: 2.16, 2.41, 2.66
On Tue, 21 Dec 2004 16:30:52 +0100, you wrote:
On Tuesday 21 December 2004 01:05, Michael W Cocke wrote:
I just wanted the OP to have all the facts... (sound familiar?). FWIW, I did communicate back when I was trying to use Mondo & Mindi. You were just as polite and helpful then as you have been in this thread. If you don't want to hear about problems in your design, don't write badly designed code.
you seem to think that I wrote the code. Let's get the first thing straight: I didn't write the code. Just that simple. I help folks when I can. Perhaps you spoke to Hugo, the author, about your 50-media limit. I
I wish you'd use the same from address consistently. This is the first message I've seen in this thread from Mikenjane. I don't really care if you call yourself Bjorn or Mikenjane, but be consistent. I didn't have to write a better program - someone else has. And you'll notice that I didn't get nasty until after Bjorn (or you, whoever) did, so lose the attitude. If you prefer Mondo, fine, but don't try to sell a cheesy design as user error. A backup program that bangs out after a certain number of media is clearly a bad design. I wasn't the one who decided to try to sell Mondo as a viable backup utility in the face of the evidence. Mike-
Now come on. I recommended, and still do, mondorescue, to which you replied that it was flawed in its very design. The point that is being made (with or without your acknowledgement) is that it WAS NOT DESIGNED TO BACK UP 2TB. It was designed to back up a working system to bootable, fully automated CDs for disaster recovery. Not necessarily designed to back up data. Not only do you dig Mondo every time you mention it, you dig it every time someone else does. Before you slam it again, you might want to read what Hugo himself (the author) has to say about you and the subject! <quote from hugo> Fran Fabrizio wrote:-
In case of server failure, use Mondo to quickly get it back into the system state it was, and then use your daily backup solution to restore the latest user data. A very effective 1-2 punch.
Exactly. :) That's what Mondo is for. There are some jobs for which Mondo simply isn't designed. It isn't designed to give your spouse physical pleasure, walk your dog, teach you a foreign language, or back up 200GB of data. If you insist on backing up huge amounts of data with bzip2 and afio (which is basically what Mondo does), then of course it's slow. :) Some people just don't have the sense God gave a lemon. I've had dealings with Michael Cocke before. He's a nice guy. When he said, "I agree with this," I suspect he was talking about Bruce Marshall's recommendation of Dar. He's not a jerk. I can't comment on Bruce, although I'm surprised that someone with so many fingers in so many pies doesn't grasp the fact that bzip2 is a *slow* data compression algorithm... -Hugo
B=)
On Tuesday 21 December 2004 09:49 am, Michael W Cocke wrote:
On Tue, 21 Dec 2004 16:30:52 +0100, you wrote:
On Tuesday 21 December 2004 01:05, Michael W Cocke wrote:
I just wanted the OP to have all the facts... (sound familiar?). FWIW, I did communicate back when I was trying to use Mondo & Mindi. You were just as polite and helpful then as you have been in this thread. If you don't want to hear about problems in your design, don't write badly designed code.
you seem to think that I wrote the code. Let's get the first thing straight: I didn't write the code. Just that simple. I help folks when I can. Perhaps you spoke to Hugo, the author, about your 50-media limit. I
I wish you'd use the same from address consistently. This is the first message I've seen in this thread from Mikenjane. I don't really care if you call yourself Bjorn or Mikenjane, but be consistent.
I didn't have to write a better program - someone else has. And you'll notice that I didn't get nasty until after Bjorn (or you, whoever) did, so lose the attitude. If you prefer Mondo, fine, but don't try to sell a cheesy design as user error. A backup program that bangs out after a certain number of media is clearly a bad design. I wasn't the one who decided to try to sell Mondo as a viable backup utility in the face of the evidence.
Mike-
On Tuesday 21 December 2004 12:03 pm, Brad Bourn wrote:
I've had dealings with Michael Cocke before. He's a nice guy. When he said, "I agree with this," I suspect he was talking about Bruce Marshall's recommendation of Dar. He's not a jerk. I can't comment on Bruce, although I'm surprised that someone with so many fingers in so many pies doesn't grasp the fact that bzip2 is a *slow* data compression algorithm...
Eh?? I'm not going to take this personally, but I think you're a little mixed up. I have fingers in many pies? I don't think so. And I have said nothing about bzip2... I commented that DAR and DARomizer are what I use and that they work for me. And Mike was the one to suggest DAR, and I agreed with his suggestion. Just to clear the air here... I think you have it all bass-ackwards.
On Tuesday 21 December 2004 17:49, Michael W Cocke wrote:
On Tue, 21 Dec 2004 16:30:52 +0100, you wrote:
On Tuesday 21 December 2004 01:05, Michael W Cocke wrote:
I just wanted the OP to have all the facts... (sound familiar?). FWIW, I did communicate back when I was trying to use Mondo & Mindi. You were just as polite and helpful then as you have been in this thread. If you don't want to hear about problems in your design, don't write badly designed code.
you seem to think that I wrote the code. Let's get the first thing straight: I didn't write the code. Just that simple. I help folks when I can. Perhaps you spoke to Hugo, the author, about your 50-media limit. I
I wish you'd use the same from address consistently. This is the first message I've seen in this thread from Mikenjane. I don't really care if you call yourself Bjorn or Mikenjane, but be consistent.
Did you consider that two people answered you? Can't you tell the difference? Brad has also answered the message. Now I'm answering it. Get it right.
I didn't have to write a better program - someone else has.
But can you? That's the question. As I said before, write it, and I'll test it. I made it plain that I didn't write the program. You assumed that I did. Your assumption was wrong.
And you'll notice that I didn't get nasty until after Bjorn (or you, whoever) did, so lose the attitude. If you prefer Mondo, fine, but don't try to sell a cheesy design as user error. A backup program that bangs out after a certain number of media is clearly a bad design. I wasn't the one who decided to try to sell Mondo as a viable backup utility in the face of the evidence.
No, you weren't. We will all admit it. And it doesn't work on YOUR system. Now that we have clarified the fact that you are probably one of the few users on this list running two 2.2TB systems, we know that it won't work on your system. The next time you say something about Mondo, I'll know where it came from and just ignore it. It's the least that I can do for you. I wouldn't want you to get upset. Mike
No offense intended towards *any* participant here (including myself
earlier), but can this please be taken off-list now?
--
Carpe diem - Seize the day.
Carp in denim - There's a fish in my pants!
Jon Nelson
I want to say that I'm not knocking Dar or any other solution... From my experience, mondorescue was a nice fit for what I needed. I wanted the OP to have all the options. Since some other issues came up about Mondo, I thought I'd give someone that knows more than me (of which there are a lot) a chance to respond. B-) On Monday 20 December 2004 01:34 pm, Michael W Cocke wrote:
On Mon, 20 Dec 2004 13:00:50 -0700, you wrote:
The point of mondo rescue is disaster recovery.
If you use rsync or other backups to copy data, and the entire server goes down, then you have to rebuild the server to the point where you can restore from the rsync drive, or whatever other backup device (tape, etc.) you do your incrementals with.
I use mondo for production. I distribute machines, so when I have the same hardware and software (server) setup I want to duplicate, I build the server the way I want it and do a mondo rescue to make it easy to duplicate from then on. Would work the same way for disaster recovery.
[reposted, because I forgot to change the reply-to]
[snip]
And what do you do when you cannot replace a system with the exact same configuration? In my experience, you need to reinstall Linux and then restore the data... Bare metal restore is a nice theory, but it has a bad record of colliding with reality. I have yet to see one work - including Mondo, which I did try to use until it failed to function.
Mike-
I have just powered up a new D-Link 4-Port KVM Switch DKVM-4K with three computers. The WinXP desktop is set to 32-bit colour at 1152 by 864 pixels, and the monitor screen is adjusted accurately for width and height etc. When I rotate the switch to SuSE 9.1 KDE, the desktop is smaller and offset to the left some. KDE settings are also 1152 by 864 and 76Hz. It doesn't make much difference between 76 and 68Hz, so I left it at 76Hz. I'm sure there is a reason for this; I just haven't a clue. The third screen is a Novell server, which is character-based and no problem. Any ideas on how I can get the KDE screen larger on the same monitor as WinXP?? TIA Mike
On Tue, 2005-01-11 at 23:09 +1100, Mike Dewhirst wrote:
I have just powered up a new d-link 4-Port KVM Switch DKVM-4K with three computers.
The WinXp desktop is set to 32-bit colour 1152 by 864 pixels and the monitor screen is adjusted accurately for width and height etc.
When I rotate the switch to SuSE 9.1 KDE the desktop is smaller and offset to the left some.
KDE settings are also 1152 by 864 and 76Hz
It doesn't make much difference between 76 and 68Hz so I left it at 76Hz.
I'm sure there is a reason for this I just haven't a clue.
The third screen is a Novell server which is character based and no problem.
Any ideas on how I can get the KDE screen larger on the same monitor as WinXP??
Your monitor "remembers" the various screen adjustments you make per resolution and frequency combination, meaning it will remember how you set the horizontal size and position separately for a screen of 1152x864x76 and 1152x864x75. So you're running two different settings, and the monitor adjusts to each differently. Most monitors will remember a dozen or so settings, so you really only need to make your adjustments to the Linux screen to make it look like the Windows one, and you'll be all set. If I'm completely off-base here and have no idea what I'm talking about, you're left with getting Windows and Linux to do the same resolution and refresh rate. The latter is tricky. You can set the refresh rate in Windows by going to the "Advanced" tab and picking something like 75 Hz. When you set up xorg, you don't have the option, at least not when you use something like sax2. The other option you have is creating a custom mode line for Linux using the following link. You can use that to create a mode line to stick into your /etc/X11/xorg.conf file that will exactly duplicate what you set your Windows settings to. http://xtiming.sourceforge.net/cgi-bin/xtiming.pl Regards, dk
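For what it's worth, the mode line that comes out of a calculator like that gets pasted into the Monitor section of /etc/X11/xorg.conf. Here is a sketch; the identifier, sync ranges, and timing numbers below are illustrative (the Modeline is the standard VESA timing for 1152x864 at 75 Hz), so check them against the calculator's output and your monitor's manual:

```
Section "Monitor"
    Identifier  "Monitor0"     # whatever name your Screen section refers to
    HorizSync   30-96          # take the real ranges from the monitor manual
    VertRefresh 50-160
    # 1152x864 @ 75 Hz, 108 MHz pixel clock (standard VESA timing)
    Modeline "1152x864" 108.00 1152 1216 1344 1600 864 865 868 900 +HSync +VSync
EndSection
```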
David Krider wrote:
When you set up xorg, you don't have the option, at least not when you use something like sax2. The other option you have is creating a custom mode line for Linux using the following link. You can use that to create a mode line to stick into your /etc/X11/xorg.conf file that will exactly duplicate what you set your Windows settings to.
Actually, KDE has made this very easy. Right click on the Desktop, go to Configure Desktop, Display, and set the size and refresh rate. I'm sure this would require relogging in, and it should be the same as Windows. This is with 9.2, KDE 3.3.2. -- Joe Morris New Tribes Mission Email Address: Joe_Morris@ntm.org Registered Linux user 231871
David and Joe - that was it - thanks heaps :) Mike
Check whether the monitor frequency or the resolution is different between the two systems. If it is like most modern monitors, it will remember separate settings for each size and frequency combination. Mike Dewhirst wrote:
I have just powered up a new d-link 4-Port KVM Switch DKVM-4K with three computers.
The WinXp desktop is set to 32-bit colour 1152 by 864 pixels and the monitor screen is adjusted accurately for width and height etc.
When I rotate the switch to SuSE 9.1 KDE the desktop is smaller and offset to the left some.
KDE settings are also 1152 by 864 and 76Hz
It doesn't make much difference between 76 and 68Hz so I left it at 76Hz.
I'm sure there is a reason for this I just haven't a clue.
The third screen is a Novell server which is character based and no problem.
Any ideas on how I can get the KDE screen larger on the same monitor as WinXP??
TIA
Mike
-- Joseph Loo jloo@acm.org
participants (17)
- Brad Bourn
- Bruce Marshall
- David Krider
- Eric Scott
- Joe Morris (NTM)
- Jon Nelson
- Joseph Loo
- Ken Schneider
- Michael W Cocke
- Mike
- Mike Dewhirst
- Paul W. Abrahams
- Randall R Schulz
- Sid Boyce
- Steve
- Sunny
- Susemail