While the new logging filesystems are a great improvement, my experience is that they can't survive forever in the real world without an occasional rebuild or fsck. This list has had warnings from people burnt by reiserfs. I haven't (yet) lost any data, but I have had some scary times. This wasn't down to bugs in reiserfs (3.6) itself, as most of the instability was tracked to (very marginally) flaky RAM. However, while the glitches were caused by corrupt RAM, they left me with faults in the filesystem, faults that persisted across reboots.

These included un-list-able and un-cat-able files, i.e. read or ask the size of such a file and it's bye-bye to that terminal. It made the whole system unusable as processes "trod on the cracks" and hung. Backups? Hah, not with that file in the partition.

So I think a lot of the bad press stems from the misconception that any filesystem can avoid bit-rot forever without an fsck. But this is painful to do by hand: I have to boot a rescue system and run reiserfsck manually to check the root and system partitions.

How can I get back the old behaviour of an fsck happening during boot every x reboots or y days? Or, how can I trigger an "fsck reboot"?

TIA,
michaelj

PS: I've just realized I can do it by adding an fsck to the linuxrc script of a cooked initrd image. That would give me an "fsck boot" option in grub. Comments?

-- 
Michael James                         michael.james@csiro.au
System Administrator                    voice:  02 6246 5040
CSIRO Bioinformatics Facility             fax:  02 6246 5166
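[For anyone searching the archives, a sketch of both approaches discussed above. The /forcefsck and `shutdown -F` conventions are honoured by many sysvinit-based distributions (check your distro's checkroot/rc scripts); the linuxrc fragment is only an illustration, and the device name and the `fsckboot` kernel parameter are assumptions to adapt to your own layout.]

```shell
# --- Forcing a check on the next boot (sysvinit convention) ---

# Option 1: create the flag file the boot scripts look for, then reboot.
touch /forcefsck
reboot

# Option 2: sysvinit shutdown can create /forcefsck for you with -F.
shutdown -rF now

# --- The "fsck boot" initrd idea ---
# Added inside the initrd's linuxrc, before the real root is mounted.
# Selecting a grub entry that appends "fsckboot" to the kernel command
# line then triggers the check:
if grep -q fsckboot /proc/cmdline; then
    reiserfsck --fix-fixable /dev/hda1
fi
```

Note that reiserfsck must run on an unmounted (or read-only) filesystem, which is exactly why doing it from the initrd, before the root is mounted read-write, works where running it on a live system does not.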