Ken, you may be the only one who can shed any light on my question, as it is difficult and needs an answer from a long-standing UNIX/Linux user. That being said, anyone else who knows the answer or can contribute, please jump in.

With Linux coming from a Unix server heritage and now being a desktop solution, I was wondering what fault tolerance there is with respect to verifying reads and writes to the hard disk or an NFS server. Some server operating systems employ Hot Fix (redirecting bad sectors so a file is written correctly to the hard disk) and transaction tracking. Are any of these carried over into a Linux workstation?

This goes with my question about verification of files. I was faithfully backing up my /home directory and sub-directories, and I lost my entire system, which was my fault and a long story. When it came to recovering the backup archive, the archive header had been corrupted. I used KDAR to do the backup; it has no verify option, and KDAR could not read the archive file.

My question to you is: are there any built-in file integrity mechanisms like those listed above to ensure every write and read is actually performed and checked by the O/S fault tolerance systems?

Regards to all,
Scott
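(Since KDAR itself offers no verify option, one workaround I have seen suggested is to record a checksum of the archive right after the backup finishes and re-check it before trusting a restore. A minimal sketch, assuming GNU coreutils and placeholder paths; the file names here are just stand-ins for a real archive:)

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for a freshly written backup archive:
echo "backup payload" > "$tmp/backup.dar"

# Record a checksum immediately after the backup completes:
sha256sum "$tmp/backup.dar" > "$tmp/backup.dar.sha256"

# Later, before restoring, confirm the archive is unchanged;
# this prints "<path>: OK" on success and fails if it was corrupted:
sha256sum -c "$tmp/backup.dar.sha256"
```

This would not prevent a corrupted write, but it would have flagged the bad archive header before the original data was gone.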