Daniel Hanke wrote:
On Wednesday, 25 May 2005 at 10:58, Peter Manias wrote:
Hello,
we see this state of the RAID arrays all the time, even after a reboot. Checking the SCSI connections didn't change anything either.
Is there a way to check the hardware state of the disks, either with on-board tools or with a tool we could obtain if necessary? The installed disks are IBM Ultrastar.
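One common option is smartctl from the smartmontools package, which can read the SMART health data of SCSI disks such as the IBM Ultrastar series. A sketch, assuming the two RAID members are /dev/sda and /dev/sdb (device names taken from the dmesg output; adjust to your system):

```shell
# Read SMART health data with smartctl (smartmontools package).
# Device names are assumptions -- substitute your actual disks.

smartctl -i /dev/sda           # identify the drive: vendor, model, serial
smartctl -H /dev/sda           # overall health self-assessment
smartctl -a /dev/sda           # full report, including the drive's error log
smartctl -t short /dev/sda     # start a short self-test ...
smartctl -l selftest /dev/sda  # ... and read its result afterwards
```

On older kernels or smartctl versions you may need to add `-d scsi` so the SCSI transport is used explicitly.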
Since there apparently are also problems with the tape drive, I could imagine a fault in the SCSI controller, whose exact type I don't know (yet). Are there test tools for that as well?
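The controller type can usually be determined with on-board tools before looking for vendor diagnostics. A sketch (output depends entirely on the hardware present):

```shell
# Identify the SCSI controller and the devices attached to it.
cat /proc/scsi/scsi      # devices the kernel sees on the SCSI bus (disks, tape)
lspci | grep -i scsi     # controller model and vendor (pciutils package)
dmesg | grep -i -e scsi -e aic -e sym   # driver messages from boot
```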
Regarding the tip about backing up to the external disk: does Linux detect the disk automatically on a USB port, or does something still need to be configured? Does tar create an image that I could then transfer 1:1 to a new disk if necessary?
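On a kernel with usb-storage support, a USB disk shows up as an additional SCSI device (e.g. /dev/sdc) and only needs to be partitioned, formatted, and mounted. Note that tar creates a file archive, not a block-for-block image; restoring it onto a new, freshly formatted disk reproduces the files 1:1, but not the partition table or boot sector (dd would be the tool for a true block image). A sketch, assuming the external disk is mounted at the hypothetical path /mnt/usb:

```shell
#!/bin/sh
# backup_dir SRC DESTDIR: archive directory SRC into DESTDIR/backup.tar.gz,
# preserving permissions. Paths below are assumptions, not fixed names.
backup_dir() {
    src=$1
    destdir=$2
    tar -czpf "$destdir/backup.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
}

# Typical use once the USB disk is mounted at /mnt/usb:
#   backup_dir /home /mnt/usb
# Restore later onto a new, mounted disk:
#   tar -xzpf /mnt/usb/backup.tar.gz -C /mnt/newdisk
```

For backing up the root filesystem itself, add `--one-file-system` so /proc and other virtual filesystems stay out of the archive.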
Regards, Pit
- What did dmesg say?
This is an excerpt of the output of the dmesg command. Is that enough for a few additional tips? Unfortunately I also have the problem that I cannot partition a (new) external disk (connected via USB): the machine apparently hangs, and only a power-off helps.

md: adding sda5 ...
md: created md2
md: bind<sda5,1>
md: bind<sdb5,2>
md: running: <sdb5><sda5>
md: sdb5's event counter: 0000005a
md: sda5's event counter: 00000042
md: superblock update time inconsistency -- using the most recent one
md: freshest: sdb5
md: kicking non-fresh sda5 from array!
md: unbind<sda5,1>
md: export_rdev(sda5)
md: md2: raid array is not clean -- starting background reconstruction
md: RAID level 1 does not need chunksize! Continuing anyway.
md2: max total readahead window set to 508k
md2: 1 data-disks, max readahead per data-disk: 508k
raid1: device sdb5 operational as mirror 1
raid1: md2, not all disks are operational -- trying to recover array
raid1: raid set md2 active with 1 out of 2 mirrors
md: updating md2 RAID superblock on device
md: sdb5 [events: 0000005b]<6>(write) sdb5's sb offset: 2104384
md: recovery thread got woken up ...
md2: no spare disk to reconstruct array! -- continuing in degraded mode
md3: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
[events: 00000000]
md: invalid raid superblock magic on md2
md: md2 has invalid sb, not importing!
md: no nested md device found
md: considering sdb2 ...
md: adding sdb2 ...
md: adding sda2 ...
md: created md1
md: bind<sda2,1>
md: bind<sdb2,2>
md: running: <sdb2><sda2>
md: sdb2's event counter: 00000040
md: sda2's event counter: 0000005d
md: superblock update time inconsistency -- using the most recent one
md: freshest: sda2
md: kicking non-fresh sdb2 from array!
md: unbind<sdb2,1>
md: export_rdev(sdb2)
md: md1: raid array is not clean -- starting background reconstruction
md: RAID level 1 does not need chunksize! Continuing anyway.
md1: max total readahead window set to 508k
md1: 1 data-disks, max readahead per data-disk: 508k
raid1: device sda2 operational as mirror 0
raid1: md1, not all disks are operational -- trying to recover array
raid1: raid set md1 active with 1 out of 2 mirrors
md: updating md1 RAID superblock on device
md: sda2 [events: 0000005e]<6>(write) sda2's sb offset: 9438080
md: recovery thread got woken up ...
md1: no spare disk to reconstruct array! -- continuing in degraded mode
md2: no spare disk to reconstruct array! -- continuing in degraded mode
md3: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
[events: 70543253]
md: invalid raid superblock magic on md1
md: md1 has invalid sb, not importing!
md: no nested md device found
md: considering sdb1 ...
md: adding sdb1 ...
md: adding sda1 ...
md: created md0
md: bind<sda1,1>
md: bind<sdb1,2>
md: running: <sdb1><sda1>
md: sdb1's event counter: 0000005f
md: sda1's event counter: 00000047
md: superblock update time inconsistency -- using the most recent one
md: freshest: sdb1
md: kicking non-fresh sda1 from array!
md: unbind<sda1,1>
md: export_rdev(sda1)
md: md0: raid array is not clean -- starting background reconstruction
md: RAID level 1 does not need chunksize! Continuing anyway.
md0: max total readahead window set to 508k
md0: 1 data-disks, max readahead per data-disk: 508k
raid1: device sdb1 operational as mirror 1
raid1: md0, not all disks are operational -- trying to recover array
raid1: raid set md0 active with 1 out of 2 mirrors
md: updating md0 RAID superblock on device
md: sdb1 [events: 00000060]<6>(write) sdb1's sb offset: 3148608
md: recovery thread got woken up ...
md0: no spare disk to reconstruct array! -- continuing in degraded mode
md1: no spare disk to reconstruct array! -- continuing in degraded mode
md2: no spare disk to reconstruct array! -- continuing in degraded mode
md3: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
[events: 7631eacb]
md: invalid raid superblock magic on md0
md: md0 has invalid sb, not importing!
md: no nested md device found
md: ... autorun DONE.
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during recovery.
(recovery.c, 256): journal_recover: JBD: recovery, exit status 0, recovered transactions 1751345 to 1751368
(recovery.c, 258): journal_recover: JBD: Replayed 1226 and revoked 0/0 blocks
kjournald starting. Commit interval 5 seconds
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem) readonly.
Trying to move old root to /initrd ... failed
Unmounting old root
Trying to free ramdisk memory ... okay
Freeing unused kernel memory: 160k freed
md: array md0 already exists!
md: array md1 already exists!
md: array md2 already exists!
md: array md3 already exists!
md: Autodetecting RAID arrays.
[events: 00000040]
[events: 00000042]
[events: 00000040]
[events: 00000047]
md: autorun ...
md: considering sda1 ...
md: adding sda1 ...
md: md0 already running, cannot run sda1
md: export_rdev(sda1)
md: (sda1 was pending)
md: considering sdb2 ...
md: adding sdb2 ...
md: md1 already running, cannot run sdb2
md: export_rdev(sdb2)
md: (sdb2 was pending)
md: considering sda5 ...
md: adding sda5 ...
md: md2 already running, cannot run sda5
md: export_rdev(sda5)
md: (sda5 was pending)
md: considering sdb6 ...
md: adding sdb6 ...
md: md3 already running, cannot run sdb6
md: export_rdev(sdb6)
md: (sdb6 was pending)
md: ... autorun DONE.
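The log shows each mirror kicking out its "non-fresh" partner (sda5, sdb2, sda1) and running degraded on one disk. If the ejected disk turns out to be healthy, the usual step is to re-add the stale partitions so the kernel resynchronizes them. A sketch, with device and array names taken from the dmesg output above; on 2.4-era systems with the older raidtools, raidhotadd plays the role of mdadm's --add:

```shell
# Check which arrays are degraded ([U_] instead of [UU]):
cat /proc/mdstat

# Re-add the kicked partitions with mdadm (if installed);
# the kernel then rebuilds the mirror in the background:
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sdb2
mdadm /dev/md2 --add /dev/sda5

# Equivalent with the older raidtools:
#   raidhotadd /dev/md0 /dev/sda1
```

If the same disk keeps falling out of the arrays after a resync, that points back at the hardware (disk, cabling, or controller) rather than the md layer.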
cu Daniel
Regards, Pit --