[opensuse] No longer a GRUB problem (Was: BIOS/GRUB Problem)
There seems to be a residual problem in my system, though apparently unrelated to the swapping of the HDs that was the subject of the earlier problem.

After the corruption in BIOS and /boot was resolved, the last step was to run fsck again on the /home partition, after which the system booted and performed properly. That last step had to be repeated for the next few boots, after which the system sometimes booted normally, and sometimes needed the fsck step. In the latter case, after the herald screen (the chameleon alone), the repair screen appears, reporting an inconsistency; automatic fsck fails, and it asks for fsck to be run manually.

After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks". I interpret this as meaning that there are bad blocks on the HD.

For a few days, when fsck was needed, in both number pairs the first number was smaller with every boot. This suggested that the filesystem (ext4) was repairing itself gradually by marking the bad blocks. That changed the day before yesterday, and the first numbers of the two pairs are now successively larger, so assuming that my interpretation is correct, the HD is going to hell. In fact, one application (a Java program) has become corrupted, and I assume this is a manifestation of the same deterioration. Everything else that I have been using seems so far to be functioning correctly.

I think I need to replace the HD and reinstall v11.3 anew. I have backups of most data, with the exception of some of the KMail message files. These are organized in folders that I can copy from the .kde4 tree. Any advice or comment beyond that will be gratefully received.

-- 
Stan Goodman
Qiryat Tiv'on
Israel
-- 
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org
On 2010/11/16 14:01 (GMT+0200) Stan Goodman composed:
There seems to be a residual problem in my system, though apparently unrelated to the swapping of the HDs that was the subject of the earlier problem.
After the corruption in BIOS and /boot was resolved, the last step was to run fsck again on the /home partition, after which the system booted and performed properly. That last step had to be repeated for the next few boots, after which the system sometimes booted normally, and sometimes needed the fsck step.
In the latter case, after the herald screen (the chameleon alone), the repair screen appears, reporting an inconsistency, automatic fsck fails, and it asks for fsck to be run manually.
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks".
I interpret this as meaning that there are bad blocks on the HD.
For a few days, when fsck was needed, in both number pairs the first number was smaller with every boot. This suggested that the filesystem (ext4) was repairing itself gradually by marking the bad blocks. That changed the day before yesterday, and the first numbers of the two pairs are now successively larger, so assuming that my interpretation is correct, the HD is going to hell. In fact, one application (a Java program) has become corrupted, and I assume this is a manifestation of the same deterioration. Everything else that I have been using seems so far to be functioning correctly.
I think I need to replace the HD and reinstall v11.3 anew. I have backups of most data, with the exception of some of the KMail message files. These are organized in folders that I can copy from the .kde4 tree.
Apparently, the present sda HD is now defective in the /home partition. I don't know if I can just make another partition on the same HD, because I have no idea how far the bad region will spread. If it would be wiser to abandon this HD altogether, I can remove it and do the reinstallation on the (currently disconnected) HD which was sdb. Or I could just purchase a new HD, which might be simpler.
Before doing much of anything else, run Seagate's diagnostic software on the device to see the bad sector status. Is the Seagate out of warranty already? For several years its HDs had 5 year warranties. Since then, I think they all carry 3 unless purchased as a refurb or as part of an OEM system.

One thing to try if it passes the above test, since you have so much free space available, is to create another partition, mkfs it as something other than ext4, copy the entirety of /home to it, umount /home, make the new one your /home in fstab, and see what happens on successive boots. If nothing bad seems to happen, you might re-mkfs the original, recopy the /home content back, change fstab back, and see if it's OK as other than ext4.
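A hedged sketch of the fstab change that procedure involves (the sda8 device name and the choice of ext3 are illustrative assumptions, not Stan's actual layout):

```
# /etc/fstab -- hypothetical: sda8 is the freshly made non-ext4 partition.
# The original ext4 /home on sda7 is kept, commented out, for switching back.
#/dev/sda7   /home   ext4   defaults   1 2
/dev/sda8    /home   ext3   defaults   1 2
```

After editing fstab, `umount /home && mount /home` (or a reboot) picks up the new partition.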
If I move to what was sdb, I do NOT want to confuse the BIOS again, and I would need to have advice about that.
With only one SATA HD (and no connected USB storage) in the system the BIOS will not be causing any "confusion", but the way I remember SATA port behavior you can expect Grub to need reinstalling as a (hd0,6) device instead of the (hd1,6) that it was. Why not disconnect the Seagate, connect the Hitachi, and see if Grub will boot 11.1 to find that out?

-- 
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)

Team OS/2 ** Reg. Linux User #211409
Felix Miata *** http://fm.no-ip.com/
On 2010/11/16 14:01 (GMT+0200) Stan Goodman composed:
There seems to be a residual problem in my system, though apparently unrelated to the swapping of the HDs that was the subject of the earlier problem.
After the corruption in BIOS and /boot was resolved, the last step was to run fsck again on the /home partition, after which the system booted and performed properly. That last step had to be repeated for the next few boots, after which the system sometimes booted normally, and sometimes needed the fsck step.
In the latter case, after the herald screen (the chameleon alone), the repair screen appears, reporting an inconsistency, automatic fsck fails, and it asks for fsck to be run manually.
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks".
I interpret this as meaning that there are bad blocks on the HD.
For a few days, when fsck was needed, in both number pairs the first number was smaller with every boot. This suggested that the filesystem (ext4) was repairing itself gradually by marking the bad blocks. That changed the day before yesterday, and the first numbers of the two pairs are now successively larger, so assuming that my interpretation is correct, the HD is going to hell. In fact, one application (a Java program) has become corrupted, and I assume this is a manifestation of the same deterioration. Everything else that I have been using seems so far to be functioning correctly.

I think I need to replace the HD and reinstall v11.3 anew. I have backups of most data, with the exception of some of the KMail message files. These are organized in folders that I can copy from the .kde4 tree.

Apparently, the present sda HD is now defective in the /home partition. I don't know if I can just make another partition on the same HD, because I have no idea how far the bad region will spread. If it would be wiser to abandon this HD altogether, I can remove it and do the reinstallation on the (currently disconnected) HD which was sdb. Or I could just purchase a new HD, which might be simpler.

At 15:26:27 on Tuesday, 16 November 2010, Felix Miata <mrmazda@earthlink.net> wrote:
Before doing much of anything else, run Seagate's diagnostic software on the device to see the bad sector status. Is the Seagate out of warranty already? For several years its HDs had 5 year warranties. Since then, I think they all carry 3 unless purchased as a refurb or as part of an OEM system.
I'll look for the diagnostic software on the Seagate site. Do you recall the name of the file? If the warranty was for three years, it will be marginal, but the place I bought the drive will know.
One thing to try if it passes the above test, since you have so much free space available, is to create another partition, mkfs it as something other than ext4, copy the entirety of /home to it, umount /home, make the new one your /home in fstab, and see what happens on successive boots. If nothing bad seems to happen, you might re-mkfs the original, recopy the /home content back, change fstab back, and see if it's OK as other than ext4.
Why must it be an fs other than ext4?
If I move to what was sdb, I do NOT want to confuse the BIOS again, and I would need to have advice about that.
With only one SATA HD (and no connected USB storage) in the system the BIOS will not be causing any "confusion", but the way I remember SATA port behavior you can expect Grub to need reinstalling as a (hd0,6) device instead of the (hd1,6) that it was. Why not disconnect the Seagate, connect the Hitachi, and see if Grub will boot 11.1 to find that out?
I asked about confusion because I also mentioned the possibility of connecting the second disk (Hitachi) as well. I'll connect the Hitachi instead of the Seagate as you suggest. I can't think that either of us would be optimistic that it will boot.

-- 
Stan Goodman
Qiryat Tiv'on
Israel
On 2010/11/16 16:41 (GMT+0200) Stan Goodman composed:
Felix Miata wrote:
Before doing much of anything else, run Seagate's diagnostic software on the device to see the bad sector status. Is the Seagate out of warranty already? For several years its HDs had 5 year warranties. Since then, I think they all carry 3 unless purchased as a refurb or as part of an OEM system.
I'll look for the diagnostic software on the Seagate site. Do you recall the name of the file?
Used to and may still be Seatools, but I never get it from Seagate. Along with equivalent tools from other brands and much much more it's included on the diagnostic CD everyone should have from http://ultimatebootcd.com/
If the warranty was for three years, it will be marginal, but the place I bought the drive will know.
One thing to try if it passes the above test, since you have so much freespace available is to create another partition, mkfs it as something other than ext4, copy the entirety of /home to it, umount home, make the new your /home in fstab, and see what happens on successive boots. If nothing bad seems to happen, you might re-mkfs the original, recopy /home content back, change fstab back, and see if it's OK as other than ext4.
Why must it be an fs other than ext4?
To rule out ext4 as the problem itself? It's too young for me to use here. I stick with the familiar that all kernels under this roof understand.
With only one SATA HD (and no connected USB storage) in the system the BIOS will not be causing any "confusion", but the way I remember SATA port behavior you can expect Grub to need reinstalling as a (hd0,6) device instead of the (hd1,6) that it was. Why not disconnect the Seagate, connect the Hitachi, and see if Grub will boot 11.1 to find that out?
I asked about confusion because I also mentioned the possibility of connecting the second disk (Hitachi) as well.
I think that, now that all the basics (other than the fsck problem) are functioning on the Seagate, plugging in the Hitachi should not disrupt anything, as long as the BIOS maintains the Seagate as priority over the Hitachi.
I'll connect the Hitachi instead of the Seagate as you suggest. I can't think that either of us would be optimistic that it will boot.
If you have a floppy with Grub on it you could start it and get booted manually as you had been from HD in recent weeks, then put Grub somewhere other than where it is now (sdb6?) as an emergency tool for getting booted while a normal sda is absent. Grub need not be on / or /boot. That's just a convention. As long as you know how to use the Grub shell, all you need is a partition it can recognize with stage1 installed, and a /boot/grub directory containing stage2, where you could also put another menu.lst.

Or, if planning not to replace the Seagate, just go ahead and install 11.3 to new partitions on the Hitachi as sda, and use 11.3's Grub to boot 11.1 if and when you need it. That way you'd be able to choose 11.3's Grub from BM whenever the Hitachi appears as (hd0), and 11.1's Grub from BM whenever the Hitachi appears as (hd1).
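For readers unfamiliar with the Grub-legacy shell, a minimal sketch of the manual install described above (assuming, purely for illustration, that stage1/stage2 live on sdb6, which is (hd1,5) in Grub's zero-based numbering):

```
grub> root (hd1,5)    # partition holding /boot/grub/stage2 (sdb6; numbering is an assumption)
grub> setup (hd1)     # write stage1 to the Hitachi's MBR, pointing at stage2 on (hd1,5)
grub> quit
```

The same commands work from a Grub floppy's shell, which is what makes it useful as an emergency boot tool.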
At 19:11:56 on Tuesday, 16 November 2010, Felix Miata <mrmazda@earthlink.net> wrote:
On 2010/11/16 16:41 (GMT+0200) Stan Goodman composed:
Felix Miata wrote:
Before doing much of anything else, run Seagate's diagnostic software on the device to see the bad sector status. Is the Seagate out of warranty already? For several years its HDs had 5 year warranties. Since then, I think they all carry 3 unless purchased as a refurb or as part of an OEM system.
I'll look for the diagnostic software on the Seagate site. Do you recall the name of the file?
Used to and may still be Seatools, but I never get it from Seagate. Along with equivalent tools from other brands and much much more it's included on the diagnostic CD everyone should have from http://ultimatebootcd.com/
I downloaded a bootable DOS file from Seagate and put it on a mini-CD; tomorrow sometime I'll run it. I'll try to get similar files for the Hitachi and for the Western Digital in the laptop. In the process of burning the Seagate iso, I discovered that K3b isn't working right either. I burned the file in the laptop.
If the warranty was for three years, it will be marginal, but the place I bought the drive will know.
One thing to try if it passes the above test, since you have so much free space available, is to create another partition, mkfs it as something other than ext4, copy the entirety of /home to it, umount /home, make the new one your /home in fstab, and see what happens on successive boots. If nothing bad seems to happen, you might re-mkfs the original, recopy the /home content back, change fstab back, and see if it's OK as other than ext4.
Why must it be an fs other than ext4?
To rule out ext4 as the problem itself? It's too young for me to use here. I stick with the familiar that all kernels under this roof understand.
With only one SATA HD (and no connected USB storage) in the system the BIOS will not be causing any "confusion", but the way I remember SATA port behavior you can expect Grub to need reinstalling as a (hd0,6) device instead of the (hd1,6) that it was. Why not disconnect the Seagate, connect the Hitachi, and see if Grub will boot 11.1 to find that out?
I asked about confusion because I also mentioned the possibility of connecting the second disk (Hitachi) as well.
I think that, now that all the basics (other than the fsck problem) are functioning on the Seagate, plugging in the Hitachi should not disrupt anything, as long as the BIOS maintains the Seagate as priority over the Hitachi.
I'll connect the Hitachi instead of the Seagate as you suggest. I can't think that either of us would be optimistic that it will boot.
If you have a floppy with Grub on it you could start it and get booted manually as you had been from HD in recent weeks, then put Grub somewhere other than where it is now (sdb6?) as an emergency tool for getting booted while a normal sda is absent. Grub need not be on / or /boot. That's just a convention. As long as you know how to use the Grub shell, all you need is a partition it can recognize with stage1 installed, and a /boot/grub directory containing stage2, where you could also put another menu.lst.
Or, if planning not to replace the Seagate, just go ahead and install 11.3 to new partitions on the Hitachi as sda, and use 11.3's Grub to boot 11.1 if and when you need it. That way you'd be able to choose 11.3's Grub from BM whenever the Hitachi appears as (hd0), and 11.1's Grub from BM whenever the Hitachi appears as (hd1).
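As an illustrative sketch (not taken from an actual installation), the 11.3 menu.lst entry that chainloads the 11.1 Grub might read as follows; the (hd1,5) location, i.e. sdb6, is an assumption:

```
# Hypothetical entry in 11.3's /boot/grub/menu.lst
title openSUSE 11.1 (Grub on sdb6)
    rootnoverify (hd1,5)
    chainloader +1
```

chainloader +1 hands control to the boot sector of that partition, so 11.1's own Grub menu appears next.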
-- 
Stan Goodman
Qiryat Tiv'on
Israel
At 22:55:40 on Tuesday, 16 November 2010, Stan Goodman <stan.goodman@hashkedim.com> wrote:
At 19:11:56 on Tuesday, 16 November 2010, Felix Miata <mrmazda@earthlink.net> wrote:
On 2010/11/16 16:41 (GMT+0200) Stan Goodman composed:
Felix Miata wrote:
Before doing much of anything else, run Seagate's diagnostic software on the device to see the bad sector status. Is the Seagate out of warranty already? For several years its HDs had 5 year warranties. Since then, I think they all carry 3 unless purchased as a refurb or as part of an OEM system.
I'll look for the diagnostic software on the Seagate site. Do you recall the name of the file?
Used to and may still be Seatools, but I never get it from Seagate. Along with equivalent tools from other brands and much much more it's included on the diagnostic CD everyone should have from http://ultimatebootcd.com/
I downloaded a bootable DOS file from Seagate and put it on a mini-CD; tomorrow sometime I'll run it. I'll try to get similar files for the Hitachi and for the Western Digital in the laptop.
In the process of burning the Seagate iso, I discovered that K3b isn't working right either. I burned the file in the laptop.
If the warranty was for three years, it will be marginal, but the place I bought the drive will know.
One thing to try if it passes the above test, since you have so much free space available, is to create another partition, mkfs it as something other than ext4, copy the entirety of /home to it, umount /home, make the new one your /home in fstab, and see what happens on successive boots. If nothing bad seems to happen, you might re-mkfs the original, recopy the /home content back, change fstab back, and see if it's OK as other than ext4.
Why must it be an fs other than ext4?
To rule out ext4 as the problem itself? It's too young for me to use here. I stick with the familiar that all kernels under this roof understand.
With only one SATA HD (and no connected USB storage) in the system the BIOS will not be causing any "confusion", but the way I remember SATA port behavior you can expect Grub to need reinstalling as a (hd0,6) device instead of the (hd1,6) that it was. Why not disconnect the Seagate, connect the Hitachi, and see if Grub will boot 11.1 to find that out?
I asked about confusion because I also mentioned the possibility of connecting the second disk (Hitachi) as well.
I think that, now that all the basics (other than the fsck problem) are functioning on the Seagate, plugging in the Hitachi should not disrupt anything, as long as the BIOS maintains the Seagate as priority over the Hitachi.
I'll connect the Hitachi instead of the Seagate as you suggest. I can't think that either of us would be optimistic that it will boot.
If you have a floppy with Grub on it you could start it and get booted manually as you had been from HD in recent weeks, then put Grub somewhere other than where it is now (sdb6?) as an emergency tool for getting booted while a normal sda is absent. Grub need not be on / or /boot. That's just a convention. As long as you know how to use the Grub shell, all you need is a partition it can recognize with stage1 installed, and a /boot/grub directory containing stage2, where you could also put another menu.lst.

With only the Seagate disk connected, I ran the Seagate diagnostic disk a little while ago (the Long test). The disk passed with flying colors, so it does seem that Carlos's explanation for the problem may be close to the truth -- too many non-contiguous files. I find that surprising, given the fact that this is a new installation, made on freshly formatted partitions. After installation was complete, I copied much data from the Documents folder of v11.1, as well as PIM data from ~/.kde. If the OS had to store fragments of those files here and there, leaving a lot of non-contiguous files, I don't know what I could have done to prevent that from happening. But there are two things that make me question this scenario: 1) the file system (ext4) is journaled, which is supposed to prevent that, and 2) the reports from running fsck almost daily show that non-contiguous files have always been between 0.5 - 0.7% of the total -- never even close to one percent. If that is too much (how would I know?), that might mean that ext4 isn't as zealous about defragmentation as it might be.
Or, if planning not to replace the Seagate, just go ahead and install 11.3 to new partitions on the Hitachi as sda, and use 11.3's Grub to boot 11.1 if and when you need it. That way you'd be able to choose 11.3's Grub from BM whenever the Hitachi appears as (hd0), and 11.1's Grub from BM whenever the Hitachi appears as (hd1).
-- 
Stan Goodman
Qiryat Tiv'on
Israel
On Tuesday, 2010-11-16 at 14:01 +0200, Stan Goodman wrote:
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks".
I interpret this as meaning that there are bad blocks on the HD.
No. From the output above, no. If there is more, perhaps.

-- 
Cheers,
Carlos E. R. (from 11.2 x86_64 "Emerald" at Telcontar)
At 15:51:37 on Tuesday, 16 November 2010, "Carlos E. R." <robin.listas@telefonica.net> wrote:
On Tuesday, 2010-11-16 at 14:01 +0200, Stan Goodman wrote:
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks".
I interpret this as meaning that there are bad blocks on the HD.
No.
From the output above, no. If there is more, perhaps.
There is nothing else that indicates a problem. When I have run fsck a second time without rebooting, it has always said that the file system is clean. If "No", what is it telling me?

-- 
Stan Goodman
Qiryat Tiv'on
Israel
On Tuesday, 2010-11-16 at 16:13 +0200, Stan Goodman wrote:
If "No", what is it telling me?
Simply so many files out of so many possible (of which so many are non-contiguous), and so many blocks out of so many possible. :-)

-- 
Cheers,
Carlos E. R. (from 11.2 x86_64 "Emerald" at Telcontar)
Hello,

On Tue, 16 Nov 2010, Stan Goodman wrote:
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks".
I interpret this as meaning that there are bad blocks on the HD.
Those numbers have nothing whatsoever to do with bad blocks.

An ext2/3/4 filesystem has a fixed number of inodes, determined on creation. Each file (regular file, directory, symlink, socket, device, named pipe) uses one inode. Hardlinks do not need additional inodes, just some space in the directory. So, the first pair of numbers tells you that currently there are 21727 inodes out of 2564096 inodes in use. Compare with the output of 'df -i /dev/sda7'.

Of those 21727 files, 0.7% (or about 152 files) are not stored contiguously on the disk; each such file is in two or more "pieces". E.g., with each '=' or '-' being one block (usually 4 KiB):

  whatever  file_one          file_two [..] Rest_of_file_one  next_file
 ---------========-----------[..]-================-----------

The sequence of blocks '=' where file_one "resides" on the disk is not in one piece; it is 'non-contiguous'.

The second pair of numbers tells you how many blocks the filesystem has (10241429) and how many of those are in use by files (1982750). Compare with the output of 'df /dev/sda7', but you'll need to adjust the block-size. Assuming 4 KiB:

  df -B 4096 /dev/sda7

You can find out how large your blocks are (and a lot more) by using:

  tune2fs -l /dev/sda7

or just for the blocksize:

  tune2fs -l /dev/sda7 | grep -i Block.size

If you want to find out if your disk has bad blocks, run

  smartctl -s on /dev/sda
  smartctl -A /dev/sda

as root. The relevant attributes are 5, 196, 197 and 198. Use e.g.:

  smartctl -A /dev/sda | \
    awk '$1 ~ /^(5|19[678])/ { if( $NF != 0 ) { print FILENAME " " $2 "\t" $NF; } }'

to filter for those attributes.

HTH,
-dnh
-- 
The social dynamics of the net are a direct consequence of the fact that nobody has yet developed a Remote Strangulation Protocol. -- Larry Wall
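To make this reading of the fsck summary line concrete, here is a small sketch (the summary string is copied from Stan's report; the awk parsing itself is an illustration, not part of fsck):

```shell
# Parse an fsck summary line of the form
#   device: USED/TOTAL files (P% non-contiguous), USED/TOTAL blocks
line='/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks'

# Field 2 is the inode pair; the second-to-last field is the block pair
echo "$line" | awk '{
    split($2, inodes, "/");        # 21727 inodes in use out of 2564096
    split($(NF-1), blocks, "/");   # 1982750 blocks in use out of 10241429
    printf "inodes used: %d of %d (%.1f%% of capacity)\n",
           inodes[1], inodes[2], 100 * inodes[1] / inodes[2];
    printf "blocks used: %d of %d (%.1f%% of capacity)\n",
           blocks[1], blocks[2], 100 * blocks[1] / blocks[2];
    printf "~%d non-contiguous files\n", inodes[1] * 0.007;
}'
```

Note that both ratios are plain usage figures; nothing in the line refers to bad or failing sectors.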
At 02:20:32 on Thursday, 18 November 2010, David Haller <dnh@opensuse.org> wrote:
Hello,
On Tue, 16 Nov 2010, Stan Goodman wrote:
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks".
I interpret this as meaning that there are bad blocks on the HD.
Those numbers have nothing whatsoever to do with bad blocks.
Bad blocks was my guess (I had not found any explanation of what the numbers mean). Carlos quickly told me that my guess was wrong, but your detailed explanation is much appreciated. I have since run the diagnostic test on the disk in question, which found no defects.
An ext2/3/4 filesystem has a fixed number of inodes, determined on creation. Each file (regular file, directory, symlink, socket, device, named pipe) uses one inode. Hardlinks do not need additional inodes, just some space in the directory. So, the first pair of numbers tells you that currently there are 21727 inodes out of 2564096 inodes in use. Compare with the output of 'df -i /dev/sda7'...
Very clear. It wouldn't have been a lot of trouble for the output of fsck to provide that level of labeling.
Of those 21727 files, there are 0.7% (or about 152 files) not contiguously on the disk, the file has two or more "pieces". E.g., with each '=' or '-' being one block (usually 4 KiB)
The designation "non-contiguous" is very clear.
  whatever  file_one          file_two [..] Rest_of_file_one  next_file
 ---------========-----------[..]-================-----------

The sequence of blocks '=' where file_one "resides" on the disk is not in one piece; it is 'non-contiguous'.
The second pair of numbers tells you how many blocks the filesystem has (10241429) and how many of those are in use by files (1982750). Compare with the output of 'df /dev/sda7', but you'll need to adjust the block-size. Assuming 4KiB:
df -B 4096 /dev/sda7
You can find out how large your blocks are (and a lot more) by using:
tune2fs -l /dev/sda7
*****
# tune2fs -l /dev/sda7
tune2fs 1.41.11 (14-Mar-2010)
Filesystem volume name:   <none>
Last mounted on:          /home
Filesystem UUID:          681b8f76-77c1-44b9-9c89-ed9027b2a4c2
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              2564096
Block count:              10241429
Reserved block count:     512071
Free blocks:              8254037
Free inodes:              2541547
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1021
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Mon Sep 13 22:40:36 2010
Last mount time:          Sun Jul  4 03:00:08 2010
Last write time:          Sun Jul  4 03:00:08 2010
Mount count:              1
Maximum mount count:      -1
Last checked:             Sun Jul  4 03:01:09 2010
Check interval:           0 (<none>)
Lifetime writes:          25 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      7beec659-fa23-417b-804d-254d47341e59
Journal backup:           inode blocks
*****

Interesting that the file system was most recently mounted and written two months before it was created. But I do not see that it is running out of inodes (under 1%) or blocks (20%); why is it complaining?
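Stan's percentages can be checked directly from the tune2fs figures above (a quick sketch; the numbers are copied from his output):

```shell
# From tune2fs -l /dev/sda7: totals and free counts
itot=2564096;  ifr=2541547     # Inode count / Free inodes
btot=10241429; bfr=8254037     # Block count / Free blocks

# Used = total - free; show usage as a percentage of capacity
awk -v itot="$itot" -v ifr="$ifr" -v btot="$btot" -v bfr="$bfr" 'BEGIN {
    printf "inodes used: %d (%.1f%% of capacity)\n", itot - ifr, 100 * (itot - ifr) / itot;
    printf "blocks used: %d (%.1f%% of capacity)\n", btot - bfr, 100 * (btot - bfr) / btot;
}'
# Under 1% of inodes and about 19% of blocks in use, matching Stan's reading
```

So the filesystem is nowhere near any capacity limit, consistent with David's point that the fsck line is purely informational.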
or just for the blocksize:
tune2fs -l /dev/sda7 | grep -i Block.size
If you want to find out if your disk has bad blocks, run
smartctl -s on /dev/sda smartctl -A /dev/sda
*****
# smartctl -A /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (openSUSE RPM)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   119   099   006    Pre-fail  Always       -       226329685
  3 Spin_Up_Time            0x0003   097   097   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       816
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   064   060   030    Pre-fail  Always       -       3056198
  9 Power_On_Hours          0x0032   098   098   000    Old_age   Always       -       2032
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       408
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   094   000    Old_age   Always       -       150
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   072   051   045    Old_age   Always       -       28 (Lifetime Min/Max 21/28)
194 Temperature_Celsius     0x0022   028   049   000    Old_age   Always       -       28 (0 20 0 0)
195 Hardware_ECC_Recovered  0x001a   047   019   000    Old_age   Always       -       226329685
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       32388348382196
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       163897792
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       1026609259
*****
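Applying David's filter idea to the bad-sector attributes in this output (a sketch; the three rows are copied from Stan's listing into a temporary file so the filter can run without the actual disk):

```shell
# Bad-sector attributes copied verbatim from Stan's smartctl -A output
cat > /tmp/smart_excerpt.txt <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
EOF

# Report only attributes 5/196/197/198 whose raw value (last field) is nonzero
awk '$1 ~ /^(5|19[678])$/ && $NF != 0 { print $2 "\t" $NF }' /tmp/smart_excerpt.txt
# Prints nothing here: no reallocated, pending, or offline-uncorrectable sectors
```

An empty result from this filter is the "healthy" case; the alarming raw values elsewhere (Raw_Read_Error_Rate, Seek_Error_Rate) are vendor-specific counters, not sector failures.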
as root. The relevant attributes are 5, 196, 197 and 198. Use e.g.:
196 is missing from the output. If I am reading this correctly, the disk is on its last legs. This is not an old disk (less than a year, much less than I remembered). Am I misinterpreting it?
smartctl -A /dev/sda | \ awk '$1 ~ /^(5|19[678])/ { if( $NF != 0 ) { print FILENAME " " $2 "\t" $NF; } }'
to filter for those attributes.
HTH,
Many thanks...

-- 
Stan Goodman
Qiryat Tiv'on
Israel
Hello, On Thu, 18 Nov 2010, Stan Goodman wrote:
At 02:20:32 on Thursday, 18 November 2010, David Haller <dnh@opensuse.org> wrote:
On Tue, 16 Nov 2010, Stan Goodman wrote:
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks". [..] You can find out how large your blocks are (and a lot more) by using:
tune2fs -l /dev/sda7
*****
# tune2fs -l /dev/sda7
Errors behavior:          Continue
I like to use 'remount-ro' here. That way I have a chance to notice problems. Use 'tune2fs -e remount-ro /dev/sda7' to adjust (see 'man tune2fs').
Filesystem OS type:       Linux
Inode count:              2564096
Second number from the first group (number of files)
Block count: 10241429
Second number from the second group (number of blocks)
Free blocks: 8254037
10241429 - 8254037 = 1987392 (number of used blocks)
Free inodes: 2541547
2564096 - 2541547 = 22549
Block size: 4096
For use with 'df -B'.
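[Editorial note: the used-block figure above converts to bytes with that 4096-byte block size; a quick shell sketch, using only numbers quoted in this thread (the df line is left as a comment since /dev/sda7 is specific to this machine):]

```shell
block_size=4096        # "Block size:" from tune2fs -l
used_blocks=1987392    # block count minus free blocks, computed above
echo "$((used_blocks * block_size)) bytes in use"
# df -B 4096 /dev/sda7   # would report sizes in the same 4096-byte units
```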
Filesystem created:       Mon Sep 13 22:40:36 2010
Last mount time:          Sun Jul  4 03:00:08 2010
Last write time:          Sun Jul  4 03:00:08 2010
Mount count:              1
Maximum mount count:      -1
Last checked:             Sun Jul  4 03:01:09 2010
[..]

Interesting that the file system was most recently mounted and written two months before it was created.
Is your time set correctly? Or more specifically: the hardware clock in the BIOS? I'm not sure at what point during boot it is copied to the system time.
But I do not see that it is running out of inodes (under 1%) or blocks (20%); why is it complaining?
It is not. It's just an informational status message at the end of any fsck.
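[Editorial note: the ratios in that status line indeed work out well below any worrying level; a quick check of the percentages, with the numbers taken from the fsck report quoted above:]

```shell
# inodes and blocks in use, from the fsck summary "used/total" pairs
awk 'BEGIN {
    printf "inodes in use: %.1f%%\n", 100 * 21727   / 2564096
    printf "blocks in use: %.1f%%\n", 100 * 1982750 / 10241429
}'
# -> inodes in use: 0.8%
# -> blocks in use: 19.4%
```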
smartctl -s on /dev/sda
smartctl -A /dev/sda
*****
# smartctl -A /dev/sda
  1 Raw_Read_Error_Rate     226329685
That's usually nothing to worry about, depends on the drive though. Some drives raise that in normal operations, more or less continuously. Other drives usually have this staying at 0.
5 Reallocated_Sector_Ct 0
ok.
7 Seek_Error_Rate 3056198
Also depending on the drive. I see this being "high" with the 2 Seagates, and not with the Samsungs.
10 Spin_Retry_Count 0
Ok (not bad blocks related, but mechanical).
183 Runtime_Bad_Block       0
184 End-to-End_Error        0
187 Reported_Uncorrect      0
ok.
195 Hardware_ECC_Recovered 226329685
Usually nothing to worry about (depends on the drive).
197 Current_Pending_Sector  0
198 Offline_Uncorrectable   0
Ok (those, and no. 5 above, are the attributes relevant to bad blocks ;)
240 Head_Flying_Hours 32388348382196
Obviously bogus.
196 is missing from the output.
It's drive-dependent. Probably equivalent to 183 or 187. See http://en.wikipedia.org/wiki/S.M.A.R.T#ATA_S.M.A.R.T._attributes
If I am reading this correctly, the disk is on its last legs. This is not an old disk (less than a year, much less than I remembered). Am I misinterpreting it?
Yes. The disk is probably fine. You might want to run 'smartd', though, to keep an eye on the disk. (In this box I have only one drive with similarly high numbers for attributes 1, 7 and 195, and that one also has some bad sectors; but AFAIR some drives in the other box have numbers like yours, and those drives are fine.)

HTH,
-dnh
-- 
I can't see a conspicuous evolutionary advantage in being good at higher mathematics. -- James Riden
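[Editorial note: for reference, a minimal /etc/smartd.conf sketch for that kind of monitoring; the device name and mail target are placeholders to adapt, see 'man smartd.conf':]

```
# monitor all attributes of /dev/sda, keep SMART and attribute autosave on,
# run a short self-test every Saturday at 03:00, mail root on problems
/dev/sda -a -o on -S on -s S/../../6/03 -m root
```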
At 23:20:39 on Thursday, 18 November 2010, David Haller <dnh@opensuse.org> wrote:
Hello,
On Thu, 18 Nov 2010, Stan Goodman wrote:
At 02:20:32 on Thursday, 18 November 2010, David Haller
<dnh@opensuse.org> wrote:
On Tue, 16 Nov 2010, Stan Goodman wrote:
After fsck finishes, its report includes (these are numbers from a few days ago): "/dev/sda7: 21727/2564096 files (0.7% non-contiguous), 1982750/10241429 blocks".
[..]
You can find out how large your blocks are (and a lot more) by using: tune2fs -l /dev/sda7
*****
# tune2fs -l /dev/sda7
Errors behavior:          Continue
I like to use 'remount-ro' here. That way I have a chance to notice problems. Use 'tune2fs -e remount-ro /dev/sda7' to adjust (see 'man tune2fs').
Filesystem OS type:       Linux
Inode count:              2564096
Second number from the first group (number of files)
Block count: 10241429
Second number from the second group (number of blocks)
Free blocks: 8254037
10241429 - 8254037 = 1987392 (number of used blocks)
Free inodes: 2541547
2564096 - 2541547 = 22549
Block size: 4096
For use with 'df -B'.
Filesystem created:       Mon Sep 13 22:40:36 2010
Last mount time:          Sun Jul  4 03:00:08 2010
Last write time:          Sun Jul  4 03:00:08 2010
Mount count:              1
Maximum mount count:      -1
Last checked:             Sun Jul  4 03:01:09 2010
[..]
Interesting that the file system was most recently mounted and written two months before it was created.
Is your time set correctly? Or more specifically: the hardware clock in the BIOS? I'm not sure at what point during boot it is copied to the system time.
No, it isn't, as I mentioned some time ago. When normal booting fails, the BIOS time is displayed (along with a notice that it is wrong) as 2002/1/12. When the system is up, the time is displayed correctly because NTP has corrected it.
But I do not see that it is running out of inodes (under 1%) or blocks (20%); why is it complaining?
It is not. It's just an informational status message at the end of any fsck.
smartctl -s on /dev/sda
smartctl -A /dev/sda
*****
# smartctl -A /dev/sda
1 Raw_Read_Error_Rate 226329685
That's usually nothing to worry about, depends on the drive though. Some drives raise that in normal operations, more or less continuously. Other drives usually have this staying at 0.
5 Reallocated_Sector_Ct 0
ok.
7 Seek_Error_Rate 3056198
Also depending on the drive. I see this being "high" with the 2 Seagates, and not with the Samsungs.
10 Spin_Retry_Count 0
Ok (not bad blocks related, but mechanical).
183 Runtime_Bad_Block       0
184 End-to-End_Error        0
187 Reported_Uncorrect      0
ok.
195 Hardware_ECC_Recovered 226329685
Usually nothing to worry about (depends on the drive).
197 Current_Pending_Sector  0
198 Offline_Uncorrectable   0
Ok (those, and no. 5 above, are the attributes relevant to bad blocks ;)
240 Head_Flying_Hours 32388348382196
Obviously bogus.
It antedates the Big Bang.
196 is missing from the output.
It's drive-dependent. Probably equivalent to 183 or 187.
See http://en.wikipedia.org/wiki/S.M.A.R.T#ATA_S.M.A.R.T._attributes
If I am reading this correctly, the disk is on its last legs. This is not an old disk (less than a year, much less than I remembered). Am I misinterpreting it?
Yes. The disk is probably fine. You might want to run 'smartd', though, to keep an eye on the disk. (In this box I have only one drive with similarly high numbers for attributes 1, 7 and 195, and that one also has some bad sectors; but AFAIR some drives in the other box have numbers like yours, and those drives are fine.)
HTH, -dnh
I can't see a conspicuous evolutionary advantage in being good at higher mathematics. -- James Riden
Why... That would imply that rock singers and basketball players would have more reproductive opportunities than mathematicians! Impossible to believe.
-- 
Stan Goodman
Qiryat Tiv'on
Israel
On 2010/11/19 00:13 (GMT+0200) Stan Goodman composed:
David Haller wrote:
Interesting that the file system was most recently mounted and written two months before it was created.
Is your time set correctly? Or more specifically: the hwclock in the BIOS? I'm not sure at what time that is read to the system-time during boot.
No, it isn't, as I mentioned some time ago. When normal booting fails, the BIOS time is displayed (along with a notice that it is wrong) as 2002/1/12. When the system is up, the time is displayed correctly because NTP has corrected it.
If the BIOS cannot retain the time, it likely cannot retain other settings uncorrupted either. You need to verify that the clock can keep time while powered down overnight or longer: go into the BIOS to confirm the correct time right before powering down for an extended period, and after shutdown, unplug the power cable. If it cannot keep time for several hours or more, then more than likely the common CR2032 coin-cell battery on the motherboard isn't making good contact, or needs to be replaced. A bad battery, and the resulting BIOS data corruption and/or memory loss, could explain the installation and Grub trouble you experienced in recent weeks.
-- 
"The wise are known for their understanding, and pleasant words are persuasive." Proverbs 16:21 (New Living Translation)

Team OS/2 ** Reg. Linux User #211409

Felix Miata *** http://fm.no-ip.com/
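[Editorial note: the size of that reset is easy to quantify; a small sketch comparing the bogus RTC date with the real one, dates taken from this thread, GNU date assumed:]

```shell
# The RTC showed 2002/1/12 while the actual date was 2010/11/18
bios=$(TZ=UTC date -d 2002-01-12 +%s)
real=$(TZ=UTC date -d 2010-11-18 +%s)
echo "RTC is behind by $(( (real - bios) / 86400 )) days"
# -> RTC is behind by 3232 days
```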
participants (4):
- Carlos E. R.
- David Haller
- Felix Miata
- Stan Goodman