On Wednesday 06 October 2004 07:10 pm, Anders Johansson wrote:
> On Thursday, 7 October 2004 04.30, Jerome Lyles wrote:
>> Then I decided to reboot, and this has led to a serious problem for me.
>> When I reboot I get the message 'fsck failed, my / partition is being
>> mounted read-only' and I must log in as root. However, when I run mount
>> I get this:
>> (none):~# mount
> hm, why does this say (none)? It should say (repair filesystem)
It still says (none).
>> /dev/hdb2 on / type reiserfs (rw)
>> /dev/hdb2 on / type reiserfs (rw)
>> Two entries for the same file system! And the filesystem is mounted (rw).
This is still true. Is there some way to check whether /dev/hdb2 really is mounted twice? Is there some way to check whether mount itself is broken?
> Makes very little sense.
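> One way to check: compare what the kernel itself thinks is mounted with
> what the mtab file says. mount normally just prints /etc/mtab, which is a
> plain file and can pick up stale duplicate entries when / has been mounted
> read-only; /proc/mounts is the kernel's own list. For example:
>
>   cat /proc/mounts   # the kernel's view of what is mounted
>   cat /etc/mtab      # the file that mount prints
>
> If /proc/mounts lists /dev/hdb2 only once, mount itself isn't broken;
> /etc/mtab just has a leftover duplicate entry.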
>> PS: At the 'fsck failed' login prompt I log in as root, enter 'init 5' and restart X.
> That is not a good idea. Few things will work, as you've noticed. You really can't afford to ignore a corrupt root file system. If fsck fails, it is for a reason.
Maybe it would help if someone could tell me how to make fsck say why it's failing, some kind of verbose-mode boot option for fsck.
>> This does not seem to be a file system problem according to the rescue script on the installation DVD. So why is fsck failing at boot? Why does reiserfsck --fix-fixable complain that it can't fix the file system because it was mounted with write permissions? Is it supposed to be unmounted to fix?
> Yes, although it can work if it's mounted read-only.
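> For example (a sketch; do this from the 'fsck failed' root login, not
> from X):
>
>   mount -o remount,ro /              # make sure / really is read-only
>   reiserfsck --check /dev/hdb2       # only reports problems, fixes nothing
>   reiserfsck --fix-fixable /dev/hdb2
>
> --check should also answer your question about why fsck is failing, since
> it prints what it finds.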
>> Why does mount show two instances of my root partition? How come umount doesn't get rid of one of them?
> All excellent questions that I think will be hard to answer without seeing the system.
> My suggestion would be to boot the rescue system on the DVD and run reiserfsck on /dev/hdb2 without mounting it.
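> Something like this, from the rescue prompt (without mounting /dev/hdb2
> first):
>
>   reiserfsck --check /dev/hdb2        # diagnose first
>   reiserfsck --fix-fixable /dev/hdb2  # repair what it found, if anything
>   mount -o ro /dev/hdb2 /mnt          # then verify it mounts cleanly
>   ls /mnt
>   umount /mnt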
Using the rescue system on unmounted partitions I've run:

  e2fsck -f -c -v -B /dev/hda1

(my MBR is on this drive); it checks out clean. On /dev/hdb2 (/) I've run:

  reiserfsck /dev/hdb2
  reiserfsck --fix-fixable /dev/hdb2
  reiserfsck --rebuild-tree /dev/hdb2
  reiserfsck --rebuild-sb /dev/hdb2
  reiserfsck --fix-fixable /dev/hdb2   (again)

There were some problems; all were corrected. Both filesystems have a clean bill of health according to reiserfsck and e2fsck.

I found a clue, maybe. In /etc/init.d/rc*.d/ there is more than one link for each of these services: syslog, nmb, ntop, snort, portmap, resmgr, smbfs, splash_early, nfslock, nfs, nfsboot, acpid, alsasound, fbset, postgresql, running-kernel, sshd, splash, atalk, hwscan, xntpd, cups, ypserv, nscd, smb, nfsserver, postfix, apache2, cron, splash_late; this is a partial list.

Should I bother trying to sidestep this problem by copying my root partition to my external firewire drive? If yes, what dd or rsync command would do this for me? Someone mentioned dd_rescue, but there are no info or man pages for it.

Also, should I move the MBR from hda to hdb, since my / is there? If yes, how do I do that? Or should I put a clean install on /dev/hda1 or my external firewire drive and rsync my /dev/hdb2 / partition with the clean install?

Both reiserfsck and e2fsck in rescue mode say /dev/hdb2 and /dev/hda1 respectively are clean. If this is true, what other part of the system could be causing this behavior?

Thanks,
Jerome
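PS: For the copy, would something along these lines do it? (Untested; I'm guessing the firewire disk shows up as /dev/sda1 and that /mnt/backup exists -- substitute the real names.)

  mount /dev/sda1 /mnt/backup
  # -a archive, -v verbose, -x don't cross filesystem boundaries,
  # -H preserve hard links
  rsync -avxH / /mnt/backup/
  umount /mnt/backup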