[opensuse] Converting file system
I have openSUSE 10.2 installed on an HP Pavilion DV9208nr and have been using the XFS file system. I have been having random lockups, so I am converting the file systems from XFS to ext3. I have tarred and gzipped my /var, /tmp and /home partitions and then unmounted them. They are logical volumes in a volume group. I have already converted them to ext3 and restored the data; all is fine. Now, how would I go about making an ext3 file system on a logical volume from the command line? I have been reading the man pages for lvm but have not found it yet. I have been using the YaST2 LVM module, but some file systems will need to be made from the command line, like /opt, /usr, and /. I think I should be looking at something like mkfs.ext3 with /dev/VolGroup00/opt00 as the partition. Still not sure, though. Any ideas or thoughts would be greatly appreciated. -- John Registered Linux User 263680, get counted at http://counter.li.org -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
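[Editor's note: the question is indeed reaching for mkfs.ext3 against the logical volume's device node. A hedged sketch follows; the volume group and LV names are taken from the question and may differ on your system, and running mkfs destroys the existing XFS data on that volume. The second half demonstrates the same invocation safely against a scratch image file instead of a real device.]

```shell
# On the real logical volume (run as root, data-destructive):
#   umount /opt
#   mkfs.ext3 /dev/VolGroup00/opt00
#   mount /dev/VolGroup00/opt00 /opt

# The same mkfs.ext3 invocation demonstrated safely on a plain file:
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=64 status=none
mkfs.ext3 -q -F "$img"     # -F forces mkfs to accept a non-block file
fstype=$(file "$img")      # e.g. "... ext3 filesystem data ..."
echo "$fstype"
rm -f "$img"
```

An LVM logical volume is just a block device, so mkfs.ext3 treats it exactly like a disk partition; no lvm-specific command is involved in making the file system.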
On Saturday 21 April 2007 09:50, John Pierce wrote:
I have opensuse 10.2 installed on an HP Pavilion DV9208nr and have been using the xfs file system. I have been having random lockups and I am converting the file systems to ext3 from xfs.
I would not attribute these symptoms to XFS. XFS is a mature, stable file system, at least as much as any other available in SuSE Linux / openSUSE. If your system is unstable, I'd diagnose the problem, probably hardware-related in this case, before a protracted file system conversion that is unlikely to yield any improvement.

If you can establish a login via ssh or telnet from another computer (only use telnet if the connection is via a network link that's behind a firewall or otherwise isolated from the Internet) in advance of the symptom, then when the hang occurs you may still be able to run some commands such as ps, top or one of the various monitoring commands. The first thing to look for is processes hung in a 'D' wait state (using ps). This can sometimes be the result of software problems (disk or file system driver bugs), but when it occurs frequently it is probably the sign of a problem with a disk drive, controller or bus interface component.
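[Editor's note: Randall's D-state check can be run as a one-liner; the ps/awk combination below is standard and nothing in it is specific to openSUSE.]

```shell
# Show the header plus any process whose state starts with 'D'
# (uninterruptible sleep, usually waiting on disk I/O).
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^D/'
```

On a healthy system this usually prints only the header line; a growing list of D-state processes during a hang points at the disk, controller, or driver, as described above.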
...
Any ideas or thoughts would be greatly appreciated.
You have mine.
-- John
Randall Schulz
I would not attribute these symptoms to XFS. XFS is a mature, stable file system, at least as much as any other available in SuSE Linux / openSUSE.
If your system is unstable, I'd diagnose the problem, probably hardware-related in this case, before a protracted file system conversion that is unlikely to yield any improvement.
If you can establish a login via ssh or telnet from another computer (only use telnet if the connection is via a network link that's behind a firewall or otherwise isolated from the Internet) in advance of the symptom, then when the hang occurs you may still be able to run some commands such as ps, top or one of the various monitoring commands. The first thing to look for is processes hung in a 'D' wait state (using ps). This can sometimes be the result of software problems (disk or file system driver bugs), but when it occurs frequently it is probably the sign of a problem with a disk drive, controller or bus interface component.
I have seen a lot of traffic on other lists about the XFS file system and random lockups caused by it. As posted earlier, I have already converted the /home, /var, and /tmp partitions to ext3 and the problems seem to be less frequent now. As to frequency, I was having at least two lockups per day on average. During these lockups I tried to ssh in to the machine but always got a 'no route to host' response from ssh. Only powering off the machine would bring it back. I have not seen anything in the logs to indicate a problem. Thanks for your input. I will continue to try to get into the machine during any lockup, though. -- John
On Sun, 22 Apr 2007, John Pierce wrote:
If your system is unstable, I'd diagnose the problem, probably hardware-related in this case, before a protracted file system conversion that is unlikely to yield any improvement.
Only powering off the machine would bring it back. I have not seen anything in the logs to indicate a problem.
Generally lockups are caused by hardware problems and are either heat, power or memory related. I suggest you run memtest86 for at least a day and see if any errors show up. Also check your power supply and the CPU temperature when the system locks up. Intermittent hardware problems can be very hard to diagnose. -- Regards, Graham Smith
Generally lockups are caused by hardware problems and are either heat, power or memory related.
I suggest you run memtest86 for at least a day and see if any errors show up. Also check your power supply and the CPU temperature when the system locks up. Intermittent hardware problems can be very hard to diagnose.
I ran memtest86 for over 9 hours last weekend and found no errors during the testing. Current CPU temp is running ~54C and ambient is ~21C. I do not believe it has exceeded those points, and if it has, not by much. I am going to keep monitoring, but I have already noticed that I have not had a lockup today, and I converted the /home partition to ext3 yesterday around 1400 hours. -- John
On Saturday 21 April 2007 13:06, John Pierce wrote:
Generally lockups are caused by hardware problems and are either heat, power or memory related.
I suggest you run memtest86 for at least a day and see if any errors show up. Also check your power supply and the CPU temperature when the system locks up. Intermittent hardware problems can be very hard to diagnose.
I ran memtest86 for over 9 hours last weekend and found no errors during the testing.
Current CPU temp is running ~54C and ambient is ~21C. I do not believe it has exceeded those points, and if it has, not by much.
I am going to keep monitoring, but I have already noticed that I have not had a lockup today, and I converted the /home partition to ext3 yesterday around 1400 hours.
If there were problems of the magnitude this suggests with the XFS code, then many, many other people (myself included) would be experiencing these or similar symptoms. I've been using XFS for several years and have had but a very few lockups (fewer than five, I'd guess). I recall one that definitely happened on a ReiserFS partition: processes accessing files and directories on that partition all hung in D wait states. (I have only one Reiser partition and chose it because that volume was large and meant to hold a lot of small files.) I have five other file system volumes, all XFS, across four drives, including a root and a home-dir partition. They have proved very reliable.

I think it's far more likely that this is a hardware problem. Just because switching to ext3 suppresses the symptoms does not mean that the underlying problem is not still there, just that it is not manifest right now. If the hardware condition deteriorates, the symptoms, or others, may resurface over time. Consider, too, the "folk wisdom" that some hardware which exhibits no problems when running Windows sometimes displays unreliability when running Linux.
-- John
Randall Schulz
Consider, too, the "folk wisdom" that some hardware which exhibits no problems when running Windows sometimes displays unreliability when running Linux.
Well, I will continue to look. This machine has never had Windows on it (save for the OEM install of Vista, which I deleted before it ever ran in userspace); I partitioned the machine with SUSE 10.2 on the first boot after I got it home. Thanks, though; like I said, I will keep looking. -- John
Hello,

On Sat, 21 Apr 2007 15:41:41 -0500, "John Pierce" wrote:
Consider, too, the "folk wisdom" that some hardware which exhibits no problems when running Windows sometimes displays unreliability when running Linux.
Well, I will continue to look. This machine has never had Windows on it (save for the OEM install of Vista, which I deleted before it ever ran in userspace); I partitioned the machine with SUSE 10.2 on the first boot after I got it home.
Thanks though, like I said I will keep looking.
If you want to track the problem, there is netconsole. (A serial port would be very usable, but the HP DV9208nr has no serial port.) Another machine is necessary to use netconsole. The way to use it is described in /usr/src/linux/Documentation/networking/netconsole.txt; if the kernel-source package is not installed yet, you will need to install it.

As the document describes, suppose two Linux boxes are set up like this:

[target] -> 192.168.1.2 udp 6665 (default)
[remote] -> 192.168.1.3 udp 6666 (default) 12:34:56:78:9a:bc

target:~ # insmod /lib/modules/`uname -r`/kernel/drivers/net/netconsole.ko \
    netconsole=@192.168.1.2/eth0,@192.168.1.3/12:34:56:78:9a:bc
remote:~ # netcat -u -l -p 6666

Then check whether it is working:

target:~ # echo 1 > /proc/sys/kernel/sysrq
target:~ # echo m > /proc/sysrq-trigger

After this, the log should be displayed on the remote Linux box. If nothing is displayed, I suspect a UDP packet on the remote box or the target box (or both) is being dropped by the firewall; check /var/log/firewall and allow the UDP traffic through. That is all for the setup; then you just have to reproduce the random lockup on the target box. :)

Hope this helps. Thanks, eshsf
The Saturday 2007-04-21 at 13:24 -0700, Randall R Schulz wrote:
If there were problems of the magnitude this suggests with the XFS code, then many, many other people (myself included) would be experiencing these or similar symptoms.
Very true.
Just because switching to ext3 suppresses the symptoms does not mean that the underlying problem is not still there, just that it is not manifest right now. If the hardware condition deteriorates, the symptoms or others may resurface over time.
There is another explanation: the disk had bad sectors. Simply reformatting and rewriting in any format (even XFS again) causes those bad sectors to be remapped, and thus the problem clears itself. -- Cheers, Carlos E. R.
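[Editor's note: Carlos's remapped-sector theory can be checked directly, assuming the smartmontools package is installed. The device name /dev/sda is an example, and the commands require root.]

```shell
# Hedged sketch: ask the drive itself about remapped and pending sectors.
smartctl -H /dev/sda                                     # overall SMART verdict
smartctl -A /dev/sda | grep -i -e reallocated -e pending # per-attribute counts
```

A non-zero raw value for Reallocated_Sector_Ct or Current_Pending_Sector would support the bad-sector explanation, even if the overall health check still passes.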
John,

That's not a problem of the file system. I use XFS with Myth and I have not had any problems. Two lockups a day means something is not right. Just do the basic things:

1. Look at hardware problems. Some are easy to find, some are not :-( Load the openSUSE installation DVD and run the memory test for many hours. Be sure there are no errors there. Memory errors will produce those problems, and the memory test works very well at finding them.

2. kcontrol / KDE Components / Session Manager / "Start with an empty session". Then use the system and see what happens.

3. If that does not fix it, leave it like that and stop or remove Zen and Beagle. (This one is unlikely to cause those types of problems.)

4. You can add the "Runaway Process Catcher" to the kicker. It is a neat tool; however, such processes usually do not lock the system, while faulty memory will.

Good luck -=terry(Denver)=-

On Sat, 2007-04-21 at 12:45 -0500, John Pierce wrote:
I would not attribute these symptoms to XFS. XFS is a mature, stable file system, at least as much as any other available in SuSE Linux / openSUSE.
If your system is unstable, I'd diagnose the problem, probably hardware-related in this case, before a protracted file system conversion that is unlikely to yield any improvement.
If you can establish a login via ssh or telnet from another computer (only use telnet if the connection is via network link that's behind a firewall or otherwise isolated from the Internet) in advance of the symptom, then when the hang occurs you may still be able to run some commands such as ps, top or one of the various monitoring commands. The first thing to look for is processes hung in a 'D' wait state (using ps). This can sometimes be the result of software problems (disk or file system drive bugs) but when it occurs frequently is probably the sign of a problem with a disk drive, controller or bus interface component.
I have seen a lot of traffic on other lists about the XFS file system and random lockups caused by it. As posted earlier, I have already converted the /home, /var, and /tmp partitions to ext3 and the problems seem to be less frequent now. As to frequency, I was having at least two lockups per day on average.
During these lock ups I had tried to ssh in to the machine but always got a 'no route to host' response from ssh.
Only powering off the machine would bring it back. I have not seen anything in the logs to indicate a problem.
Thanks for your input. I will continue to try to get into the machine during any lockup, though. -- John
John Pierce wrote:
I would not attribute these symptoms to XFS. XFS is a mature, stable file system, at least as much as any other available in SuSE Linux / openSUSE.
If your system is unstable, I'd diagnose the problem, probably hardware-related in this case, before a protracted file system conversion that is unlikely to yield any improvement.
If you can establish a login via ssh or telnet from another computer (only use telnet if the connection is via a network link that's behind a firewall or otherwise isolated from the Internet) in advance of the symptom, then when the hang occurs you may still be able to run some commands such as ps, top or one of the various monitoring commands. The first thing to look for is processes hung in a 'D' wait state (using ps). This can sometimes be the result of software problems (disk or file system driver bugs), but when it occurs frequently it is probably the sign of a problem with a disk drive, controller or bus interface component.
I have seen a lot of traffic on other lists about the XFS file system and random lockups caused by it. As posted earlier, I have already converted the /home, /var, and /tmp partitions to ext3 and the problems seem to be less frequent now. As to frequency, I was having at least two lockups per day on average.
During these lock ups I had tried to ssh in to the machine but always got a 'no route to host' response from ssh.
Only powering off the machine would bring it back. I have not seen anything in the logs to indicate a problem.
Thanks for your input. I will continue to try to get into the machine during any lock up though.
try this next time http://www.flickr.com/photos/qnr/458998683/ -- Hans Krueger hkr@hanskruegerenterprizes.com mailto:hanskrueger@adelphia.net registered Linux user 289023 411024
|-----Original Message----- |From: John Pierce [mailto:john.j35@gmail.com] |I have seen a lot of traffic on other lists concerning the xfs |file system and random lockups caused by it. As posted |earlier, I have already converted the /home, /var, and /tmp |partitions to ext3 and the problems seem to be less frequent |now. As to frequency, I was having at least two lockups per day |on average.

To ease the I/O traffic and thus improve performance, mount all your partitions with the options 'noatime,nodiratime,logbufs=8'. logbufs relates to bsize/sectsz (default 4096/512); you find those settings with 'xfs_info <filesystem>'. Unless you run lvm/striping/mirroring there is no need for other options.

-- MortenB
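[Editor's note: a sketch of where MortenB's options would live. The device name and mount point below are examples, not taken from the thread.]

```shell
# Example /etc/fstab line for an XFS volume with the suggested options:
#   /dev/VolGroup00/home00  /home  xfs  noatime,nodiratime,logbufs=8  1 2

# Applying the atime options to an already-mounted filesystem, no reboot:
mount -o remount,noatime,nodiratime /home
xfs_info /home      # shows the bsize and sectsz values mentioned above
```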
On Monday 23 April 2007 01:33, Morten Bjørnsvik wrote:
...
To ease the I/O traffic and thus improve performance, mount all your partitions with the options 'noatime,nodiratime,logbufs=8'. logbufs relates to bsize/sectsz (default 4096/512); you find those settings with 'xfs_info <filesystem>'.
Often I need to know when I last accessed a file. If you do, too, then don't use noatime. Likewise for directories and nodiratime.
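[Editor's note: what Randall is relying on can be seen with stat. A small sketch; on a filesystem already mounted with noatime (or with relatime semantics) the read below may not bump the stamp at all.]

```shell
# Reading a file normally updates its atime; 'stat -c %X' prints the
# access time as seconds since the epoch.
f=$(mktemp)
echo hello > "$f"
before=$(stat -c %X "$f")
sleep 1
cat "$f" > /dev/null      # an access, not a modification
after=$(stat -c %X "$f")
echo "atime before=$before after=$after"
rm -f "$f"
```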
...
-- MortenB
Randall Schulz
The Monday 2007-04-23 at 07:21 -0700, Randall R Schulz wrote:
To ease the I/O traffic and thus improve performance, mount all your partitions with the options 'noatime,nodiratime,logbufs=8'. logbufs relates to bsize/sectsz (default 4096/512); you find those settings with 'xfs_info <filesystem>'.
Often I need to know when I last accessed a file. If you do, too, then don't use noatime. Likewise for directories and nodiratime.
I often use the modification date, sometimes the creation date, but I have never needed to use the access time. And as for dirs, simply listing a directory updates its access time. -- Cheers, Carlos E. R.
On Monday 23 April 2007 17:16, Carlos E. R. wrote:
The Monday 2007-04-23 at 07:21 -0700, Randall R Schulz wrote: ...
Often I need to know when I last accessed a file. If you do, too, then don't use noatime. Likewise for directories and nodiratime.
I often use the modification date, sometimes the creation date, but I have never needed to use the access time. And as for dirs, simply listing a directory updates its access time.
So be it. But if you need to find files knowing that you read them at a certain (or approximate) time, then the Unix "access time" is what you need.
-- Cheers, Carlos E. R.
Randall Schulz
The Monday 2007-04-23 at 18:44 -0700, Randall R Schulz wrote:
I often use the modification date, sometimes the creation date, but I have never needed to use the access time. And as for dirs, simply listing a directory updates its access time. So be it.
So be it.
But if you need to find files knowing that you read them at a certain (or approximate) time, then the Unix "access time" is what you need.
Maybe... but then, sometimes I grep across all of my home dir, so the files would all end up with the same date. Now there is Beagle, which I suppose does some of that as well. Then, I restored a full backup after a disaster last February, so all that years-old time info would have been deleted anyway... I mean, none of those stamps show real access dates. Not when "I" accessed them, anyway.

I enabled noatime (i.e., disabled that timestamp) around two years ago, I think, but I forgot nodiratime; I'm activating that now, too. I prefer disk speed over that small bit of info I don't use. I might use it, but... I haven't found a good use for it yet, so off it goes :-)

-- Cheers, Carlos E. R.
Carlos E. R. wrote:
The Monday 2007-04-23 at 18:44 -0700, Randall R Schulz wrote:
I often use the modification date, sometimes the creation date, but I have never needed to use the access time. And as for dirs, simply by listing a dir that time is modified. So be it.
But if you need to find files knowing that you read them at a certain (or approximate) time, then the Unix "access time" is what you need.
Maybe... but then, sometimes I grep across all of my home dir, so the files would all end up with the same date. Now there is Beagle, which I suppose does some of that as well. Then, I restored a full backup after a disaster last February, so all that years-old time info would have been deleted anyway... I mean, none of those stamps show real access dates. Not when "I" accessed them, anyway.
I enabled noatime (i.e., disabled that timestamp) around two years ago, I think, but I forgot nodiratime; I'm activating that now, too. I prefer disk speed over that small bit of info I don't use. I might use it, but... I haven't found a good use for it yet, so off it goes :-)
BTW, I tend to use touch for modifying timestamps (not grep). Depending on the backup tool you are using, you can retain the original time stamps of the files.

Since my tape unit started playing up after the upgrade (a long story with some very peculiar file system behaviour involved), I have been working on revising my backup procedures. I originally considered using time stamps to identify changed files, but initial tests with tar and kdar gave somewhat undesirable (and inconsistent) results. A little further investigation showed that creation time, access time and modification time seem to be set inconsistently by different applications, so as a guide for bulk processing, time stamps have limited use. E.g. many editors seem to back up the original file and create a working copy, which means access time == modification time == creation time on the apparent original, whether a change has been made or not.

So making such a change on a data directory structure would apparently not cause too many issues, but it may break the functionality of any software that relies on time stamping to back up files. However, I would not deactivate time stamping on the part of the file system holding /var/spool/cron, as time stamping is used by the run-cron script to establish when to fire certain cron scripts... There may be other applications using file time stamping to control activity.
The Tuesday 2007-04-24 at 10:45 +0100, G.T.Smith wrote: ...
BTW, I tend to use touch for modifying timestamps (not grep).
You misunderstood me. I don't use grep to modify the timestamps. I use grep for grepping - and as a side effect, as the files are accessed, the timestamps change.
Depending on the backup tool you are using you can retain the original time stamp of the files.
Access means reading, not modifying; there is no point in backing up merely "accessed" files. Or do you mean my restore method? I used a simple "copy file" - mc, I think. All files got the date the backup was made. Not what I intended, but I couldn't help it.
However, I would not deactivate time stamping on the part of the file system holding /var/spool/cron, as time stamping is used by the run-cron script to establish when to fire certain cron scripts... There may be other applications using file time stamping to control activity.
Not the access time. As I said, I disabled that timestamp about two years ago with no side effects as far as I know. Did you know that simply watching a log file continuously modifies its access time, creating write activity on the disk? That alone is reason enough to disable that time stamp on a portable.

-- Cheers, Carlos E. R.
Hi,
On 4/24/07, G.T.Smith
access time == modification time == creation time
Note that ctime is *not* creation time, it's change time. It is set any time some metadata about the file is changed (user/group ownership, change in access rights, extended attributes). See the stat(2) manpage for more info. Unix filesystems have no concept of creation time. Joe
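[Editor's note: Joe's distinction is easy to demonstrate with stat(1): a chmod changes only metadata, so ctime moves while mtime stays put.]

```shell
# mtime (%Y) tracks content writes; ctime (%Z) also tracks metadata
# changes such as permission bits.
f=$(mktemp)
echo data > "$f"
mtime1=$(stat -c %Y "$f"); ctime1=$(stat -c %Z "$f")
sleep 1
chmod 600 "$f"            # metadata-only change: no content written
mtime2=$(stat -c %Y "$f"); ctime2=$(stat -c %Z "$f")
echo "mtime delta: $((mtime2 - mtime1)), ctime delta: $((ctime2 - ctime1))"
rm -f "$f"
```

The mtime delta comes out 0 while the ctime delta is at least 1, which is exactly the "metadata changed, content did not" signal discussed below for backups.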
Joe Shaw wrote:
Hi,
On 4/24/07, G.T.Smith
wrote: access time == modification time == creation time
Note that ctime is *not* creation time, it's change time. It is set any time some metadata about the file is changed (user/group ownership, change in access rights, extended attributes). See the stat(2) manpage for more info. Unix filesystems have no concept of creation time.
Joe

Thank you for this; it explains a lot to me about some of the inconsistencies I noted... I have come across a few references suggesting that time stamping in *NIX is a bit brain damaged, and now I have a glimpse of why people have been making this assertion...
The conclusion I am coming to is that the current time stamping mechanism is inadequate for anything but the crudest of time-related file management, and possibly not even that, given the way some things manage files...

I have been exploring various strategies for a backup and archival mechanism suitable for the SOHO Linux workstation environment (particularly my own) since the apparent demise of my tape drive. Even when working, it was hitting a problem: the amount of data being backed up was beginning to exceed the capacity of the unit. I have been looking at ways of breaking the amount of material backed up down to reasonable quantities. This needs a mechanism to identify changed files, and time stamping was an option... If time stamping were reliable and consistent it could have been used to flag files to backup; it is not, so it can't. **sigh**
The Wednesday 2007-04-25 at 11:44 +0100, G.T.Smith wrote: ...
The conclusion I am coming to is that the current time stamping mechanism is inadequate for anything but the crudest of time-related file management, and possibly not even that given the way some things manage files... .... and time stamping was an option... If time stamping were reliable and consistent it could have been used to flag files to backup; it is not, so it can't. **sigh**
The modification time can be used to know when to back up the data, and the change time for the metadata. Meaning, if the modification timestamp has not changed but the change timestamp has, it should mean that the file itself is the same but the attributes have changed, and thus backing up the metadata only should suffice. In practice, you could compare all the metadata: attributes, size, dates... if any of them changes, back up the file (not optimal). Another method, safer, is to also store a checksum: if some of the metadata changes (except size), calculate the new checksum to see if a backup is needed. For this, the metadata of the last backup should be saved on disk. A good backup program should do all this automatically. -- Cheers, Carlos E. R.
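[Editor's note: a minimal sketch of Carlos's checksum idea, not any particular backup tool. The demo directory and file names are invented, and paths are assumed to contain no spaces.]

```shell
# Compare each file's MD5 against a manifest saved by the previous
# backup run, and flag only the files whose content actually changed.
demo=$(mktemp -d)
printf 'one\n' > "$demo/a.txt"
printf 'two\n' > "$demo/b.txt"
manifest="$demo/.manifest.md5"
md5sum "$demo"/*.txt > "$manifest"       # state after the "last backup"

printf 'one-changed\n' > "$demo/a.txt"   # a.txt changes, b.txt does not

changed=$(
  for f in "$demo"/*.txt; do
      new=$(md5sum "$f" | awk '{print $1}')
      old=$(awk -v p="$f" '$2 == p {print $1}' "$manifest")
      [ "$new" = "$old" ] || echo "$f"
  done
)
echo "needs backup: $changed"
rm -rf "$demo"
```

Only a.txt is flagged; a timestamp-based check would have had to trust whatever mtime the editing application happened to leave behind.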
Carlos E. R. wrote:
The Wednesday 2007-04-25 at 11:44 +0100, G.T.Smith wrote:
...
The conclusion I am coming to is that the current time stamping mechanism is inadequate for anything but the crudest of time-related file management, and possibly not even that given the way some things manage files... .... and time stamping was an option... If time stamping were reliable and consistent it could have been used to flag files to backup; it is not, so it can't. **sigh**
The modification time can be used to know when to back up the data, and the change time for the metadata. Meaning, if the modification timestamp has not changed but the change timestamp has, it should mean that the file itself is the same but the attributes have changed, and thus backing up the metadata only should suffice.
In practice, you could compare all metadata: attributes, size, dates... if any of them changes, backup the file (not optimal). Another method, safer, is to also store a checksum: if some of the metadata changes (except size), calculate the new checksum to see if a backup is needed. For this, the metadata of the last backup should be saved on disk.
A good backup program should do all this automatically.
In theory good, in practice not... well, look at the editor example originally quoted... modification time does not always mean the content has changed; it merely means the modification time stamp has changed. It would be nice if everyone handled this time stamping issue in a well-defined manner; in practice many people don't. This is not criticism, just an observation, BTW.

Yes, a good backup program will do this... but the serious players would charge me more than the underlying hardware is worth! A couple of people have pointed me to some stuff on a separate sub-thread which I intend to look at... and hopefully I can avoid having to write my own solution...
On 4/25/07, G.T.Smith
Carlos E. R. wrote:
The Wednesday 2007-04-25 at 11:44 +0100, G.T.Smith wrote:
...
The conclusion I am coming to is that the current time stamping mechanism is inadequate for anything but the crudest of time-related file management, and possibly not even that given the way some things manage files... .... and time stamping was an option... If time stamping were reliable and consistent it could have been used to flag files to backup; it is not, so it can't. **sigh**
The modification time can be used to know when to back up the data, and the change time for the metadata. Meaning, if the modification timestamp has not changed but the change timestamp has, it should mean that the file itself is the same but the attributes have changed, and thus backing up the metadata only should suffice.
In practice, you could compare all metadata: attributes, size, dates... if any of them changes, backup the file (not optimal). Another method, safer, is to also store a checksum: if some of the metadata changes (except size), calculate the new checksum to see if a backup is needed. For this, the metadata of the last backup should be saved on disk.
A good backup program should do all this automatically.
In theory good, in practice not... well, look at the editor example originally quoted... modification time does not always mean the content has changed; it merely means the modification time stamp has changed. It would be nice if everyone handled this time stamping issue in a well-defined manner; in practice many people don't. This is not criticism, just an observation, BTW.
Yes, a good backup program will do this... but the serious players would charge me more than the underlying hardware is worth! A couple of people have pointed me to some stuff on a separate sub-thread which I intend to look at... hopefully I can avoid having to write my own solution...
Neither of the solutions I posted earlier in this thread is dependent on timestamps. IIRC, especially for online backups, rdiff-backup (mentioned before) ignores timestamps altogether. It calculates the MD5 for every file to see if any changes have been introduced. If they have, it segments the file and drills down to find the smallest unit of change, and only sends that data across the LAN/WAN. Greg -- Greg Freemyer The Norcross Group Forensics for the 21st Century -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
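The "drill down to the smallest unit of change" step Greg describes is the core of the rsync family of algorithms. A toy Python illustration of the idea, comparing per-block hashes to find which blocks need transfer (this is my own sketch, not rdiff-backup's actual code; real rsync uses rolling checksums so an insertion does not shift every later block):

```python
import hashlib

BLOCK = 4096  # fixed block size for this toy; real tools pick it adaptively

def block_hashes(data, block=BLOCK):
    """Hash each fixed-size block of a byte string."""
    return [hashlib.md5(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)]

def changed_blocks(old, new, block=BLOCK):
    """Indices of blocks in `new` whose hash differs from the same-offset
    block in `old` (or which have no counterpart). Only these blocks would
    need to cross the LAN/WAN."""
    old_h, new_h = block_hashes(old, block), block_hashes(new, block)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]
```

A one-byte edit in an 8 KiB file thus costs one 4 KiB block of transfer rather than the whole file.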
Greg Freemyer wrote:
[ Stuff deleted ]
Thanks. I have taken a look at your suggestions and that of Joachim; I am impressed with the description of both. XFS is not an option as it looks like I would have to do a lot of partition juggling to make the move. Unfortunately, as there are problems with my tape unit I am for the moment constrained by the need to back up to DVD/CD, and it is not immediately clear to me how either dirvish or rdiff-backup would work effectively in this situation, but I will be looking into this further. (Actually, the more I have explored the DVD/CD element of the problem, the more I understand why no-one has produced a usable DVD/CD backup solution.) I have already decided that for the code I am working on I will be using Subversion's backup procedures to dump the repositories, and I will be dumping MySQL databases as well (probably I will eventually run queries that only dump new or modified records), which largely leaves the backup of documents, spreadsheets, etc. to something else (rdiff-backup seems to be the front runner here)... E-mail structures are going to be a particular headache.
On Thursday 26 April 2007 06:43, G.T.Smith wrote: .................snip a lot....................
If you need to back up to DVD/CD, what about Mondo/Mindi? I used it exclusively for several years a while ago. Now I use Kdar, but I back up to a spare hard drive. Just a thought. Bob S.
On Thursday 26 April 2007 20:40, Bob S wrote:
...
If you need to back up to DVD/CD what about Mondo/Mindi I used to use it exclusively for several years a while ago. Now I use Kdar but I back up to a spare hard drive.
You should be aware that Mondo is a disaster recovery / rescue solution, not technically backup software. Obviously, they have a lot in common, but basically, Mondo is not intended for routine backup purposes.
Just a thought.
Likewise.
Bob S.
Randall Schulz
The Wednesday 2007-04-25 at 16:36 -0400, Greg Freemyer wrote:
Neither of the solutions I posted earlier in this thread is dependent on timestamps.
iirc: Especially for online backups rdiff-backup mentioned before ignores timestamps altogether. It calculates the MD5 for every file to see if any changes have been introduced. If they have it segments the file and drills down to find the smallest unit of change and only sends that data across the LAN/WAN.
I doubt that. rdiff-backup is fast, and calculating MD5 for all files is slow. I think it does that only for files it thinks might have changed. Proof: backing up my mail list archive takes 4" right now. Calculating the md5sums of the same dir takes 47" (37" on a second run). Therefore, rdiff-backup must be checking metadata instead. -- Cheers, Carlos E. R.
On 4/26/07, Carlos E. R. wrote:
The Wednesday 2007-04-25 at 16:36 -0400, Greg Freemyer wrote:
Neither of the solutions I posted earlier in this thread is dependent on timestamps.
iirc: Especially for online backups rdiff-backup mentioned before ignores timestamps altogether. It calculates the MD5 for every file to see if any changes have been introduced. If they have it segments the file and drills down to find the smallest unit of change and only sends that data across the LAN/WAN.
I doubt that.
rdiff-backup is fast, and calculating MD5 for all files is slow. I think it does that only for files it thinks that might have changed.
Proof:
Backing up my mail list archive takes 4" right now. Calculating the md5sums of the same dir takes 47" (37" on a second run). Therefore, rdiff-backup must be checking metadata instead.
You're right. My backup is also too fast to actually do a hash of every file. My mistake. I checked the man page and found:
--compare-hash: This is equivalent to '--compare-hash-at-time now'
--compare-hash-at-time time: Compare a directory with the backup set at the given time. Regular files will be compared by computing their SHA1 digest on the source side and comparing it to the digest recorded in the metadata.
Or if you really want a byte-by-byte compare:
--compare-full: This is equivalent to '--compare-full-at-time now'
--compare-full-at-time time: Compare a directory with the backup set at the given time. To compare regular files, the repository data will be copied in its entirety to the source side and compared byte by byte. This is the slowest but most complete compare option.
I did not experiment with any of these options, so I don't know if you can simply add --compare-hash to a command line and get the backup decision restructured via SHA1, or if the above are only ways to run a verify pass. Personally I'm fine with basing the first decision on metadata, then letting the rsync algorithm decide what exactly needs to be sent to keep the primary and backup in sync. Greg
The Wednesday 2007-04-25 at 20:22 +0100, G.T.Smith wrote:
A good backup program should do all this automatically.
In theory good, in practice not... look at the editor example originally quoted... modification time does not always mean the content has changed; it merely means the modification time stamp has changed... it would be nice if everyone handled this time stamping issue in a well-defined manner... in practice many people don't; this is not criticism, just an observation BTW
The thing is, if the modification time is the same, the file data will still be the same and doesn't need to be backed up again. On the other hand, if it has changed, there is a doubt: either check a checksum and decide, or back up regardless. If the change time has changed, I understand that only the metadata should be saved - provided the previous test decided the data was the same.
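The mtime/ctime split being relied on here is easy to see with os.stat. The sketch below (my own illustration, not from any tool in this thread) shows that a metadata-only change such as chmod bumps the change time while leaving the modification time alone:

```python
import os
import tempfile
import time

# Create a scratch file and capture its timestamps.
path = tempfile.mkstemp()[1]
before = os.stat(path)

time.sleep(1.1)            # let the clock tick past coarse fs granularity
os.chmod(path, 0o644)      # metadata-only change: no data written
after = os.stat(path)

# mtime (data) untouched; ctime (inode/metadata) updated by chmod.
data_changed = after.st_mtime_ns != before.st_mtime_ns   # expect False
meta_changed = after.st_ctime_ns != before.st_ctime_ns   # expect True
```

A backup tool following Carlos's rule would, on seeing this pattern, re-save only the attributes and skip the file data.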
Yes, a good backup program will do this... but the serious players would charge me more than the underlying hardware is worth! A couple of people have pointed me to some stuff on a separate sub-thread which I intend to look at... hopefully I can avoid having to write my own solution...
Well, rsync makes this kind of decision, I think. And the suggested rdiff-backup improves on it by keeping old versions too. The disadvantage is that it doesn't compress data. As for backup to DVD, there is "dar" and "kdar". Plus "par", which I still haven't evaluated. And there are some other solutions in the distro, I think. -- Cheers, Carlos E. R.
Carlos E. R. wrote:
people have pointed me to some stuff on a separate sub-thread which I intend to look at.. and hopefully I can avoid having to write my own solution...
Well, rsync makes this kind of decision, I think. And the suggested rdiff-backup improves on it by keeping old versions too. The disadvantage is that it doesn't compress data.
As to backup to DVD, there is "dar" and "kdar". Plus "par", that I still haven't evaluated.
kdar is the frontend to dar... after looking at it in more depth it became apparent that one would still have to move the dar slices to an iso image, and the iso image would then need to be burned. A further complication is that although kdar apparently offers a 4Gb slice size for DVD, the dvdtools burning tools will only accept a max of 2Gb per file (slice). (And not even that... setting the slice size to 2Gb still caused problems.) Getting a listing from a dar archive requires access to the first and last slices; this suggests that for an N-slice backup one would need (N mod 2) + (N div 2) DVDs for a two-slice-per-disc solution, with the first DVD containing slice 1 and slice N. kdar provides no mechanisms for such slice management across media. While a bulk restore would be OK, I do not think this would work too well for retrieving a single file (one would not only have to know which slice a file was in but which DVD, and heaven forbid the file is split between two slices on two DVDs). dar has some other limitations as well which took it out of the picture... A brief look at par (which dar seems to link into) suggests it is not suitable for my requirements... What I intend to look at is the possibility of using rdiff-backup to build its structures and creating a tar image of that structure (which would probably be OK for disaster recovery but not brilliant for archive retrieval). Dirvish is impressive, but I think it would work best with a SAN or a dedicated backend backup server.
And there are some other solutions in the distro, I think.
I have been looking... I dunno, after years of being a sys-admin one of the things that still seems to give the most grief is backup >:o
The Thursday 2007-04-26 at 13:52 +0100, G.T.Smith wrote:
Carlos E. R. wrote:
As to backup to DVD, there is "dar" and "kdar". Plus "par", that I still haven't evaluated. kdar is the frontend to dar... after looking at it in more depth it became apparent that one would still have move the dar slices to an iso image and the iso image would then need to be burned.
Obviously. But I understand it can burn them in parallel to the backup procedure continuing.
A further complication is although kdar apparently offers a 4Gb slice size for DVD the dvdtools burning tools will only accept a max of 2Gb per file (slice).
Yes. I haven't looked at it for use with dar, but you know that you are not limited to iso images for dvd burning: I myself use XFS, which doesn't have that limitation. I wonder if I could use them from dar.
(And not even that... setting slice size to 2Gb still caused problems).
At 2 GB, 400 MB are wasted. Better four slices of 1.1 GB.
Getting a listing from a dar archive requires access to the first and last slices; this would suggest that for an N-slice backup one would need (N mod 2) + (N div 2) DVDs for a two-slice solution, with the first DVD containing slice 1 and slice N. kdar provides no mechanisms for such slice management across media. While a bulk restore would be OK, I do not think this would work too well for retrieving a single file (one would not only have to know which slice a file was in but which DVD, and heaven forbid the file is split between two slices on two DVDs). dar has some other limitations as well which took it out of the picture...
I still have to evaluate dar, but I should think it makes an index to facilitate this. Any good backup solution should handle this. I miss pctools backup from 198X-199X, for MS-DOS... it could back up files way larger than a single floppy to floppies, and restore without problems - even with media errors. My backups from that era still work.
A brief look at par (which dar seems to link into) suggests this not suitable for my requirements...
No, par is a complement to protect against errors in media, so that files are still recoverable. It is not a backup solution on itself.
What I intend to look at is the possibility of using to rdiff-backup to build its structures and create a tar image of this structure (which probably be ok for disaster recovery but not brilliant for archive retrieval). Dirvish is impressive but I think it would best with a SAN or dedicated backend backup server solution.
Both rdiff-backup and Dirvish work on a similar principle, and backup to disk. You can use Dirvish (or rdiff-backup) for day to day backup, then backup to external media (tape, dvd, whatever) from the Dirvish made backup, instead of from the original (they mention this procedure on their faq).
And there are some other solutions in the distro, I think.
I have been looking... I dunno after years of being a sys-admin one of things that still seems to give the most grief is backup >:o
Of course. -- Cheers, Carlos E. R.
Carlos E. R. wrote:
The Thursday 2007-04-26 at 13:52 +0100, G.T.Smith wrote:
Carlos E. R. wrote:
As to backup to DVD, there is "dar" and "kdar". Plus "par", that I still haven't evaluated. kdar is the frontend to dar... after looking at it in more depth it became apparent that one would still have move the dar slices to an iso image and the iso image would then need to be burned.
Obviously. But I understand it can burn them in parallel to the backup procedure continuing.
Not strictly parallel; it allows for a script to be executed after a slice has been written (or before), and one could therefore background a burn process... but if one is writing in a scenario where media size < backup size you will get no benefit, and if the number of slices per medium > 1 and the number of media > 1, some interesting juggling will need to be performed by the script to identify what needs to be written. I have not eliminated tar from the equation; it has more options than dar, there is media size limit support, and there does seem to be the possibility of piping tar output to a DVD or DVD image... but it does use either the time stamps or a record of what has been backed up. As I am no longer using tape there are a couple of options available to me that do not work with tape output...
A further complication is although kdar apparently offers a 4Gb slice size for DVD the dvdtools burning tools will only accept a max of 2Gb per file (slice).
Yes. I haven't looked at it for use with dar, but you know that you are not limited to iso images for dvd burning: I myself use XFS, wich doesn't have that limitation. I wonder I f I could use them from dar.
This 2Gb file size limit seems to be a Linux thing with DVDs; not sure whether this is just wodim or what. When I tried to generate a UDF image I got similar problems. Optical media have somewhat different write characteristics that tend to make creating an image and then writing the image the most effective way of using them; DVD+RW is the only medium that will allow random access, and not all devices support writing to it.
(And not even that... setting slice size to 2Gb still caused problems).
At 2 GB, 400 MB are wasted. Better four slices of 1.1 GB.
Still 400k wasted with 4Gb according to kDar (DVD slice is set as 4.3Gb)
To get a listing from a dar rquires access to first and last slices, this would suggest the for an N slice backup one would need (N mod 2) + (N div 2) DVDSs for a two slice solution with first DVD containing slice 1 and slice N, kdar provides no mechanisms for such slice management across media. While a bulk restore would be ok do not think this would work to well for retrieving a single file (one would not only have to know which slice a file was in but which DVD, and heaven forbid if the file is split between two slices on two DVDs). dar has some other limitations as well which took it out of picture....
I still have to evaluate dar, but I should think it makes an index to facilitate this. Any good backup solution should handle this.
An external index is an option, but then one has to transfer the index to DVD to make it available (loops within loops), and it does not deal with the issue of which part of the media set a particular file is on.
I miss pctools backup from 198X-190X, for MsDos... it could backup to floppies files way larger than a single floppy, and restore without problems - even with media errors. My backups from that era still work.
But beware the dodgy floppy ;-) [ Stuff deleted ]
The Friday 2007-04-27 at 12:50 +0100, G.T.Smith wrote:
became apparent that one would still have move the dar slices to an iso image and the iso image would then need to be burned.
Obviously. But I understand it can burn them in parallel to the backup procedure continuing. Not strictly parallel, it allows for a script to be executed after a slice has being written (or before) and one could therefore background a burn process... but if one is writing in a scenario where the media size < backup size you will get no benefit or if no of slices > 1 per media and no of media > 1 some interesting juggling will need to be performed by the script to identify what needed to be written.
I haven't really tried it yet, it is on my to do list. I do my backups to other HD and manually to DVD. Not an automated solution.
A further complication is although kdar apparently offers a 4Gb slice size for DVD the dvdtools burning tools will only accept a max of 2Gb per file (slice).
Yes. I haven't looked at it for use with dar, but you know that you are not limited to iso images for dvd burning: I myself use XFS, wich doesn't have that limitation. I wonder I f I could use them from dar. This 2Gb file size limit seems to be a linux thing with DVDs not sure whether this is just t wodim or what.
I believe it is a limitation of the iso format. As I mentioned, I burn DVDs in XFS format and I have no such limitation.
(And not even that... setting slice size to 2Gb still caused problems).
At 2 GB, 400 MB are wasted. Better four slices of 1.1 GB. Still 400k wasted with 4Gb according to kDar (DVD slice is set as 4.3Gb)
4 * 1.1 = 4.4, no waste.
I still have to evaluate dar, but I should think it makes an index to facilitate this. Any good backup solution should handle this. External index an option, but then one has to transfer the index to DVD to make it available. (loops within loops) And it does not deal with issue of which part of the media set a particular file is on.
I don't know how dar/kdar deal with it, but it should be done automatically. I.e., choose a file, and the program should ask for the exact DVD(s) to be mounted. The user should not be bothered with details.
I miss pctools backup from 198X-190X, for MsDos... it could backup to floppies files way larger than a single floppy, and restore without problems - even with media errors. My backups from that era still work.
But beware the dodgy floppy ;-)
I do have some floppies with errors. One bad error on a set of 80 floppies, data still recoverable with error correction techniques automatically applied by the program. I say I miss that software for a reason... But present day floppies are very bad quality. When I use one of those I save in duplicate or triplicate. Very unreliable nowadays. -- Cheers, Carlos E. R.
Carlos E. R. wrote:
The Friday 2007-04-27 at 12:50 +0100, G.T.Smith wrote:
I haven't really tried it yet, it is on my to do list. I do my backups to other HD and manually to DVD. Not an automated solution.
A further complication is that although kdar apparently offers a 4Gb slice size for DVD, the dvdtools burning tools will only accept a max of 2Gb per file (slice). Yes. I haven't looked at it for use with dar, but you know that you are not limited to iso images for dvd burning: I myself use XFS, which doesn't have that limitation. I wonder if I could use them from dar. This 2Gb file size limit seems to be a Linux thing with DVDs; not sure whether this is just wodim or what.
I believe it is a limitation of the iso format. As I mentioned, I burn DVDs in XFS format and I have no such limitation.
I came across something which was very critical of the decision to set this limit for Linux iso support, but again I cannot remember whether this was criticism of wodim or something else, or a completely uninformed rant (iso9660 has a number of non-standard extensions which can let you get away with a lot of things). I would not like to try setting up XFS on DVD media, which is effectively sequential... XFS is not a universally used file system and there are apparently compatibility issues between different distributions of Linux which, although recoverable, would make me wary of using it on optical media. I tend towards the view that when using portable and removable media such as CD/DVD one should stick to a format which is not going to be too closely tied to a particular configuration. Also issues such as media detection and calculating available storage come to mind... BTW I did try the experiment of creating an XFS file system on a DVD out of curiosity, and my writer just did not want to know (which did not surprise me a lot, to be honest)... I have a feeling I played around with this a long time ago and it was a non-starter then...
(And not even that... setting slice size to 2Gb still caused problems). At 2 GB, 400 MB are wasted. Better four slices of 1.1 GB. Still 400k wasted with 4Gb according to kDar (DVD slice is set as 4.3Gb)
4 * 1.1 = 4.4, no waste.
BTW Spot my not-so-deliberate mistake... the 4.7 on the box == 4.3 in computer terms ;-) Actually, in a multi-media scenario I would probably go for a slice size that would balance asynchronous DVD writes... if you have 5Gb of data you will still be using 8.6Gb of media, and at this moment multi-session data writing is not available in wodim...
I still have to evaluate dar, but I should think it makes an index to facilitate this. Any good backup solution should handle this. External index an option, but then one has to transfer the index to DVD to make it available. (loops within loops) And it does not deal with issue of which part of the media set a particular file is on.
I don't know how dar/kdar deal with it, but it should be done automatically. Ie, choose a file, and the program should ask for the exact dvd(s) to be mounted. The user should not be bothered with details.
It does not; it will know the slice but not the DVD. It does not manage removable media, merely the slices that are put on the media... it seems to be up to the user to organise the CD/DVD writing... If the DVD is an empty mounted file system it could write to that (but see above...) [Stuff deleted]
On Thursday 26 April 2007 04:50, Carlos E. R. wrote:
...
The thing is, if the modification time is the same, the file data will still be the same
That does not necessarily follow. There is a system call that allows user code to arbitrarily change the file time (it's used, in part, by backup programs that want to reset the file times of the restored file to those that were in effect for the file when it was saved to the backup medium). I know it's not typical, but then, once upon a time, I wrote a Unix tool that would save specified files' times, invoke an arbitrary command and when that command exited, restore the files' times. We ended up using it quite a lot, I recall, though for some reason I don't quite recall why (it was a long time ago).
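The tool Randall describes is straightforward to reconstruct with os.utime; the sketch below is my own illustration of the idea, not his original Unix tool (note that only atime/mtime can be restored from user code; ctime always moves):

```python
import os
import subprocess

def run_preserving_times(paths, argv):
    """Save each file's atime/mtime, run an arbitrary command, then put
    the saved times back so the command's writes don't disturb them."""
    saved = {p: os.stat(p) for p in paths}
    result = subprocess.run(argv)
    for p, st in saved.items():
        # Restore with nanosecond precision; ctime still changes.
        os.utime(p, ns=(st.st_atime_ns, st.st_mtime_ns))
    return result.returncode
```

This is exactly why mtime-based backup decisions can be fooled: any process with write access to the file can put the old timestamp back.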
and doesn't need to be backed up again. On the other hand, if it has changed, there is a doubt: either check a checksum and decide, or backup regardless.
The point is, the only 100% reliable way to tell if a file has changed is to compare it to the original. A checksum (not necessarily MD5) is the next best. Modification time alone is the weakest and least reliable way.
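Randall's ranking is easy to demonstrate: change a file's content, then restore its old mtime (as touch or a restore tool might). An mtime check now sees nothing, while a checksum still catches the change. A small self-contained sketch (mine, for illustration):

```python
import hashlib
import os
import tempfile

path = tempfile.mkstemp()[1]
with open(path, "w") as f:
    f.write("version 1")
st = os.stat(path)
old_sum = hashlib.md5(open(path, "rb").read()).hexdigest()

# Rewrite the content, then put the original mtime back.
with open(path, "w") as f:
    f.write("version 2")
os.utime(path, ns=(st.st_atime_ns, st.st_mtime_ns))

mtime_says_changed = os.stat(path).st_mtime_ns != st.st_mtime_ns
checksum_says_changed = (
    hashlib.md5(open(path, "rb").read()).hexdigest() != old_sum)
```

Here mtime reports "unchanged" and the checksum reports "changed"; only a byte-by-byte compare against the original would be stronger still.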
...
Randall Schulz
The Thursday 2007-04-26 at 06:45 -0700, Randall R Schulz wrote:
On Thursday 26 April 2007 04:50, Carlos E. R. wrote:
...
The thing is, if the modification time is the same, the file data will still be the same
That does not necessarily follow. There is a system call that allows user code to arbitrarily change the file time (it's used, in part, by backup programs that want to reset the file times of the restored file to those that were in effect for the file when it was saved to the backup medium).
The command "touch" can do that, of course. But that is not normal usage. If somebody is "messing up" the timestamps, that's his fault :-p
and doesn't need to be backed up again. On the other hand, if it has changed, there is a doubt: either check a checksum and decide, or backup regardless.
The point is, the only 100% reliable way to tell if a file has changed is to compare it to the original.
Of course.
A checksum (not necessarily MD5) is the next best. Modification time alone is the weakest and least reliable way.
People will also tell you that the only good backup is a full backup, despising incremental backups, for those reasons precisely. Depends on how "paranoid" you are, but sometimes it's better to just do a full backup. -- Cheers, Carlos E. R.
On Wed, 25 Apr 2007 20:22:47 +0100, "G.T.Smith" wrote:
In theory good, in practice not... look at the editor example originally quoted... modification time does not always mean the content has changed; it merely means the modification time stamp has changed... it would be nice if everyone handled this time stamping issue in a well-defined manner... in practice many people don't; this is not criticism, just an observation BTW
Going back into history, this was intended. The Unix/Linux touch command was used to change the time, specifically for use by the make(1) command. Most source control systems can handle this feature reasonably well.
--
Jerry Feldman
On 4/25/07, G.T.Smith wrote:
Joe Shaw wrote:
Hi,
On 4/24/07, G.T.Smith wrote:
access time == modification time == creation time
Note that ctime is *not* creation time, it's change time. It is set any time some metadata about the file is changed (user/group ownership, change in access rights, extended attributes). See the stat(2) manpage for more info. Unix filesystems have no concept of creation time.
Joe
Thank you for this, it explains a lot to me about some of the inconsistencies I noted... I have come across a few references suggesting that time stamping in *NIX is a bit brain-damaged; now I have a glimpse of why people have been making this assertion...
The conclusion I am coming to is that the current time stamping mechanism is inadequate for anything but the crudest of time-related file management, and possibly not even that, given the way some things manage files...
I have been exploring various strategies for developing a backup and archival mechanism suitable for the SOHO Linux workstation environment (particularly my own) since the apparent demise of my tape drive. Even when it was working, the amount of data being backed up was beginning to exceed the capacity of the unit. I have been looking at ways of breaking down the amount of material backed up into reasonable quantities. This needs a mechanism to identify changed files, and time stamping was an option... if time stamping were reliable and consistent it could have been used to flag files for backup; it is not, so it can't **sigh**
If you are still planning to use tape, then XFS has some advanced dump options that allow you to do various types of differential backups. I think they have a separate attribute for each file to track which level of backup was last done. Personally, for normal short-term backups (i.e. up to 6 months or so, but not years) I use rdiff-backup. I have used it both to a local dedicated mirror-set and remotely to a backup server. Both ways seem to work fine. Greg
participants (13)
-
Bob S
-
Carlos E. R.
-
eshsf
-
G.T.Smith
-
Graham Smith
-
Greg Freemyer
-
Hans Krueger
-
Jerry Feldman
-
Joe Shaw
-
John Pierce
-
Morten Bjørnsvik
-
Randall R Schulz
-
Teruel de Campo MD