On Thu, 2006-11-02 at 14:14 +1000, Intrusion Detection Account 000 wrote:
Ken, you may be the only one who can shed any light on my question, as it is difficult and needs an answer from a long-standing UNIX/Linux user. That said, anyone else who knows the answer or can contribute, please jump in.
With Linux coming from a UNIX server heritage and now being a desktop solution, I was wondering what fault tolerance there is with respect to verifying reads and writes to the hard disk or an NFS server.
Some server operating systems employ HOTFIX to ensure a file is written correctly to the hard disk, along with transaction tracking.
Are any of these features carried over into a Linux workstation? This goes along with my question about verification of files.
I was faithfully backing up my /home directory and sub-directories, and then lost my entire system, which was my fault and a long story. When it came to recovering from the backup archive, the archive header had been corrupted. I used KDAR to do the backup; it has no verify option, and KDAR could not read the archive file.
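Independent of the backup tool, corruption like this can be caught before restore time by recording a checksum when the archive is created and checking it before relying on it. Below is a minimal sketch using GNU tar and sha256sum; the paths under /tmp/backup_demo are purely illustrative, not anything from the original setup.

```shell
set -e

# Illustrative source data and backup location (assumptions, not real paths).
mkdir -p /tmp/backup_demo/src
echo "important data" > /tmp/backup_demo/src/file.txt

# 1. Create a compressed archive of the source directory.
tar -czf /tmp/backup_demo/home.tar.gz -C /tmp/backup_demo src

# 2. Record a checksum alongside the archive at backup time.
sha256sum /tmp/backup_demo/home.tar.gz > /tmp/backup_demo/home.tar.gz.sha256

# 3. Before restoring, verify the checksum still matches and that the
#    archive itself is readable end to end.
sha256sum -c /tmp/backup_demo/home.tar.gz.sha256
tar -tzf /tmp/backup_demo/home.tar.gz > /dev/null && echo "archive OK"
```

GNU tar also has a -W/--verify flag that re-reads the archive against the disk right after writing, but it cannot be combined with compression, which is why a separate checksum file is used here.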
My question to you is: are there any built-in file-integrity mechanisms like those listed above to ensure that every write and read is actually performed and checked by the O/S fault-tolerance systems?
I tried to use kdar and had nothing but problems with it. I think the problems you experienced are with the application kdar and not with the Linux system itself. If _reliable_ backups are a must, I suggest using a commercial backup program like Microlite BackupEdge. It employs good verification to ensure a reliable backup has been performed. You can download a free 60-day evaluation to test it.

--
Ken Schneider
UNIX since 1989, linux since 1994, SuSE since 1998
---------------------------------------------------------------------
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org