On Friday 01 April 2005 03:57 am, Hylton Conacher (ZR1HPC) wrote:
I have seen many folk who have signatures stating that the machine has been running for something like 400 days.
Whilst I applaud this type of reliability, I am wondering about the actual fs as the machine isn't rebooted so that fsck can check the partitions.
I am perhaps paranoid about keeping the fs in tip-top shape, but it is the foundation we all rely on.
Yeah, the paranoia's unfounded. A good filesystem doesn't have the kinds of problems FAT does. :)
So my question is this: Can a system with that kind of uptime have all its filesystems checked without being rebooted?
Could a Linux boot floppy specific to the system be inserted and mounted, then used to unmount the running system's partitions listed in fstab (except the floppy), fsck them, and then remount them, all without losing the uptime figure?
A boot floppy won't do it. Well, maybe if you gave VMware access to the physical drive and booted another system, but that's just asking for trouble.

What you're looking for is "telinit". If you just want to maintain uptime but can take the server off-line for a while, run "telinit 1" to drop down to single-user mode. Then either unmount the partitions one at a time, or, as long as they don't need major work, remount them read-only: most fscks can handle a mounted partition if it's mounted read-only. Run "mount /path/to/partition -o remount,ro" to remount the partition read-only, and then "mount /path/to/partition -o remount,rw" to re-enable read-write.

To bring the system back up to full multi-user mode, use either "telinit 3" or "telinit 5", depending on whether you want console or xdm login, respectively.
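The sequence above can be sketched as a small script. This is only a sketch, not a tested procedure: it assumes a SysV-style init, a hypothetical ext2/ext3 partition mounted at /srv, and it defaults to a dry run (DRY_RUN=1 prints each command instead of executing it, since the real thing needs root and takes the box to single-user mode).

```shell
#!/bin/sh
# Dry-run sketch of the fsck-without-reboot cycle described above.
# Assumes SysV init and a hypothetical ext2/ext3 partition mounted at /srv.
DRY_RUN=${DRY_RUN:-1}

run() {
    # Print the command in dry-run mode; execute it otherwise (needs root).
    if [ "$DRY_RUN" = 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

run telinit 1                        # drop to single-user mode
run mount -o remount,ro /srv         # read-only, so fsck sees a quiescent fs
run fsck -f /srv                     # force a full check
run mount -o remount,rw /srv         # back to read-write
run telinit 3                        # multi-user again (console login)
```

Note that remounting read-only will fail if any process still holds a file open for writing on that partition; on most Linux systems, "fuser -m /srv" lists the offenders.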
Just wondering about the fs of those MAJOR uptime hosts
They don't need fsck - they use journaling filesystems that keep themselves consistent: after a crash the journal is replayed at mount time instead of running a full check. That's most anything Windows doesn't use (ext3, ReiserFS, journaled HFS+, etc). --Danny, whose longest-uptime machine is just barely a year (stupid extended power failure), and isn't directly accessible from the internet
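One hedged caveat to the "no fsck needed" point: on ext2/ext3, distributions usually still schedule a periodic forced check at boot, driven by mount-count and time-interval fields in the superblock, and tune2fs can inspect and adjust those. A dry-run sketch for a hypothetical device /dev/sda1 (commands are only printed by default, since they need root and a real device):

```shell
#!/bin/sh
# Dry-run sketch: inspect/adjust ext2/ext3 periodic-check scheduling.
# /dev/sda1 is hypothetical; set DRY_RUN=0 and run as root to execute.
DRY_RUN=${DRY_RUN:-1}
DEV=/dev/sda1

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# List the superblock fields that drive boot-time checks
# (look for "Mount count", "Maximum mount count", "Check interval").
run tune2fs -l "$DEV"

# Force a check every 50 mounts or every 6 months, whichever comes first.
run tune2fs -c 50 -i 6m "$DEV"
```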