Is the hard disk full? Long post.
Hello all:

I have the following problem with my SUSE 10.0 OSS with KDE 3.4. I wanted to install my own freedb database. For this purpose I created a separate partition, 10 GB in size, and extracted the tar archive to it. The extraction took a really long time, hours, so I left the machine alone. After the extraction finished, tar gave some error messages saying it could not extract everything since there was no space left on the drive. However, df -h showed this:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4

After rebooting the computer I found that my LILO was corrupted, and I had to fix it using the rescue boot disk. When I was able to log in again I wanted to create new directories on the above-mentioned drive, but I got this error message:

mkdir sda4/temp
mkdir: cannot create directory `sda4/temp': No space left on device

However, df -h shows the same as above:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4

When I run fsck.ext3 (with /dev/sda4 unmounted) it does not report any error:

fsck.ext3 /dev/sda4
e2fsck 1.38 (30-Jun-2005)
/dev/sda4: clean, 1221600/1221600 files, 1301070/2441880 blocks

Now I don't know what the problem is or how I could fix it. Any help would be appreciated.

Thanks,
IG

___________________________________________________________________________
[origo] klikkbank retail account package for 199 Ft a month, with no annual bank card fee! www.klikkbank.hu
Istvan Gabor wrote:
After the extraction finished, tar gave some error messages saying it could not extract everything since there was no space left on the drive. However, df -h showed this:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4
What does "du -hs /home/user/sda4" say?
Now I don't know what the problem is and how I could fix it.
Does the freedb tar.bz2 file really expand from 450 MB to 10 GB?

/Per Jessen, Zürich
--
http://www.spamchek.ch/ - professional email security solution. Try it for 30 days - free and with no obligation!
On Saturday 25 February 2006 05:29 am, Per Jessen wrote:
Istvan Gabor wrote:
After the extraction finished, tar gave some error messages saying it could not extract everything since there was no space left on the drive. However, df -h showed this:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4
What does "du -hs /home/user/sda4" say?
Now I don't know what the problem is and how I could fix it.
Does the freedb tar.bz2 file really expand from 450Mb to 10Gb ?
Yesterday I hit a similar problem. It looked as if my / partition was completely filled. My problem lay in the /tmp directory. I found a few strange large ISO files, apparently from some backup files I made. Deleting them gave back some 43 GB of space.

I believe the real fault lies in not clearing out the /tmp directory periodically, since I never turn off the machine. In the file /etc/sysconfig/cron I changed the line:

MAX_DAYS_IN_TMP="0"

to

MAX_DAYS_IN_TMP="14"

to clear out the unused files periodically. Maybe one of the real gurus will tell us how best to do this.

Richard
Hello:
Yesterday I hit a similar problem. It looked as if my / partition was completely filled. My problem lay in the /tmp directory. I found a few strange large ISO files, apparently from some backup files I made. Deleting them gave back some 43 GB of space.
I have no tmp directory on that partition, only lost+found and the uncompressed/untarred freedb database files. See:

du -h --max-depth=1 sda4
16K     sda4/lost+found
4.7G    sda4/freedb
4.7G    sda4

/dev/sda4 is a separate partition mounted under sda4 in my home directory. UID and GID are set to my UID and GID. Something else must be wrong.

Thanks for your reply,
IG
On Saturday 25 February 2006 07:50, Richard Atcheson wrote:
Yesterday I hit a similar problem. It looked as if my / partition was completely filled. My problem lay in the /tmp directory. I found a few strange large ISO files, apparently from some backup files I made. Deleting them gave back some 43 GB of space.
I believe the real fault lies in not clearing out the /tmp directory periodically, since I never turn off the machine. In the file /etc/sysconfig/cron I changed the line:

MAX_DAYS_IN_TMP="0"

to

MAX_DAYS_IN_TMP="14"

to clear out the unused files periodically. Maybe one of the real gurus will tell us how best to do this.
Hi everybody!

I got 'bit' by this one, too, in 10.0. I'm still running my 'backup' 9.3 installation as I haven't found the time to reinstall. That was a *nasty* crash <shudder>.

In YaST's /etc/sysconfig editor: System > Cron
TMP_DIRS_TO_CLEAR (I have /tmp and /var/tmp) and
CLEAR_TMP_DIRS_AT_BOOT (mine is set to 'yes')

hth & regards,
Carl
Thanks for the reply.
However df -h showed this:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4
What does "du -hs /home/user/sda4" say?
du -sh reports:

du -sh sda4
4.7G    sda4

du -h --max-depth=1 sda4
16K     sda4/lost+found
4.7G    sda4/freedb
4.7G    sda4
Does the freedb tar.bz2 file really expand from 450Mb to 10Gb ?
I don't know the size of the completely uncompressed tree. I used the 370 MB rar archive, which resulted in a 3.2 GB tar archive after extraction. As far as I know tar doesn't compress, so I suppose the final size should also be around 3.2 GB. Am I right or wrong?

IG
Istvan Gabor wrote:
Does the freedb tar.bz2 file really expand from 450Mb to 10Gb ?
I don't know the size of the completely uncompressed tree. I used the 370 MB rar archive, which resulted in a 3.2 GB tar archive after extraction. As far as I know tar doesn't compress, so I suppose the final size should also be around 3.2 GB. Am I right or wrong?
You're right - tar doesn't compress. I've just uncompressed the freedb tar.bz2 archive and it comes to about 3 GB too. But it only took 16 mins - I think something went wrong in your decompression somehow. I would reformat that partition and start over.

/Per Jessen, Zürich
--
http://www.spamchek.com/ - managed anti-spam and anti-virus solution. Let us analyse your spam- and virus-threat - up to 2 months for free.
Hi Istvan: Saw this yesterday, didn't have time to respond. But ... On Saturday 25 February 2006 07:37, Istvan Gabor wrote: ...
I used the 370 MB rar archive, which resulted in a 3.2 GB tar archive after extraction. As far as I know tar doesn't compress, so I suppose the final size should also be around 3.2 GB. Am I right or wrong?
You are right that tar does not compress, but there is something else going on here (other than inodes). ext2/3 are conventional filesystems in the way they allocate space to files: the minimum "chunk" of space that can be allocated to a file is a cluster, and a cluster can only be allocated to one file at a time. If your cluster size is 2048 bytes, a 1-byte file takes one cluster and a 2049-byte file takes two clusters. The overhead in both cases is the same (2047 bytes), but the _percentage_ overhead in the first case is much higher. So if your tar file contains a lot of ~200-byte files and your cluster size is 2K, you will get an overhead of ~900% for those files (not counting inodes), which is in line with what you reported.

[Note: all this unallocated space at the end of clusters is not overwritten; it just contains whatever was there before the cluster was unallocated when some other file was deleted. This is one of the places that forensic recovery tools look for data.]

Reiserfs takes a different approach: it "stuffs" the ends of clusters, so it is much more efficient in its use of available disk space for small files. This explains some of the benefit that you saw.

Warm regards,
Robert
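The percentage figure above is easy to reproduce with shell arithmetic (a sketch; the 2048-byte cluster and 200-byte file sizes are just the figures from the example):

```shell
# Figures from the example above: 2 KiB clusters, ~200-byte files.
cluster=2048
filesize=200
# A file smaller than one cluster still occupies a whole cluster,
# so the wasted "slack" space is cluster - filesize.
slack=$((cluster - filesize))
# Overhead as a percentage of the data actually stored.
echo "$((100 * slack / filesize))% overhead"
```

This prints 924% overhead, in line with the ~900% estimate.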
On Wed, 2006-03-01 at 09:56 -0600, Robert Morrison wrote:
Hi Istvan:
Saw this yesterday, didn't have time to respond. But ...
On Saturday 25 February 2006 07:37, Istvan Gabor wrote:
...
I used the 370 MB rar archive, which resulted in a 3.2 GB tar archive after extraction. As far as I know tar doesn't compress, so I suppose the final size should also be around 3.2 GB. Am I right or wrong?
You are right that tar does not compress,
Not by default, but tar does include the ability to compress data as it is creating the tar file. tar --help will give you more info.

--
Ken Schneider
UNIX since 1989, linux since 1994, SuSE since 1998
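For example, with GNU tar the -z flag filters the archive through gzip while it is being created (a throwaway sketch; the /tmp paths are made up for illustration):

```shell
# Create a small directory to archive.
mkdir -p /tmp/tardemo
echo "hello" > /tmp/tardemo/a.txt
# -z pipes the archive through gzip as it is written:
tar -C /tmp -czf /tmp/demo.tar.gz tardemo
# -t lists the contents back, decompressing transparently:
tar -tzf /tmp/demo.tar.gz
```

(-j does the same with bzip2, which is how freedb distributes its dumps.)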
You are right that tar does not compress, but there is something else going on here (other than inodes). ext2/3 are
[snip]
Warm regards, Robert
Thanks for the detailed explanation!

IG
On Saturday 25 February 2006 01:08, Istvan Gabor wrote:
Hello all:
I have the following problem with my SUSE 10.0 OSS with KDE 3.4.
I wanted to install my own freedb database. For this purpose I created a separate partition with 10 GB size, and extracted the tar archive to this partition. The extraction took a really long time, hours, so I left the machine alone. After the extraction finished, tar gave some error messages saying it could not extract everything since there was no space left on the drive. However, df -h showed this:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4
After rebooting the computer I found that my LILO was corrupted, and I had to fix it using the rescue boot disk. When I was able to log in again I wanted to create new directories on the above-mentioned drive, but I got this error message:
mkdir sda4/temp
mkdir: cannot create directory `sda4/temp': No space left on device
However df -h shows the same as above:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4
When I run fsck.ext3 (with /dev/sda4 unmounted) it does not report any error:

fsck.ext3 /dev/sda4
e2fsck 1.38 (30-Jun-2005)
/dev/sda4: clean, 1221600/1221600 files, 1301070/2441880 blocks
Now I don't know what the problem is or how I could fix it. Any help would be appreciated.

Thanks,
IG
Hello IG,

I just had the same problem a few days ago. I have a separate partition for home and backup. For some reason they didn't mount automatically on reboot, so when I ran rsync to update my system everything was backed up to my root partition. It is small and filled up completely. I couldn't log in graphically, but I managed to log in on the command line. I used this command:

du -hsx /*

to find out where on my root partition the unwanted data was, and I removed it. Then I mounted the home and backup partitions and everything is fine.

I hope this helps,
Jerome
but I got this error message:
mkdir sda4/temp
mkdir: cannot create directory `sda4/temp': No space left on device

However df -h shows the same as above:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda4             9.2G  4.9G  3.9G  56% /home/user/sda4
Now I don't know what the problem is and how I could fix it.
I've asked a friend who knows Linux better. He suggested checking whether there are enough free inodes in the file system. As the freedb database contains zillions of small files, it can happen that the number of files is greater than the number of inodes available on the filesystem, even if the physical size of all the files is less than the free space on the partition. If the filesystem runs short of free inodes, new files can't be created, just as when the partition is full.

I checked the inodes:

df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda4            1221600 1221600       0  100% /home/user/sda4

As you can see, exactly what he supposed happened: all the inodes are used. That caused the problem. 10 GB of space should be enough nevertheless. The question now is how I can set up the partition so that the same size would have more inodes.

To Per Jessen: Uncompressing the bz2 file does not take a long time, but untarring takes hours. I have untarred the 3.2 GB tar file again on a 20 GB partition. It took 20 hours to finish on my system (AMD Sempron 2200+, 512 MB RAM, SATA1 HD). The final space that the whole thing occupies is 7.5 GB.

Thanks everybody for helping!
IG
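Beyond df -i, a quick way to see which directory is eating the inodes is to count entries per subdirectory, since each file or directory costs roughly one inode (a sketch; mnt is a placeholder for the mount point in question - in this thread it was /home/user/sda4):

```shell
# Mount point to inspect; defaults to the current directory.
mnt=${mnt:-.}
# Count entries under each top-level subdirectory, largest first.
# -xdev keeps find from crossing into other mounted filesystems.
for d in "$mnt"/*/; do
    [ -d "$d" ] || continue
    printf '%8d %s\n' "$(find "$d" -xdev | wc -l)" "$d"
done | sort -rn
```

The subdirectory at the top of the list is where the inodes went.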
Istvan, On Sunday 26 February 2006 14:55, Istvan Gabor wrote:
...
As you can see, exactly what he supposed happened: all the inodes are used. That caused the problem. 10 GB of space should be enough nevertheless. The question now is how I can set up the partition so that the same size would have more inodes.
When you create the file system using one of the "mkfs" commands (each different file system format has its own formatting program) you can give options that control the allocation and limits. It can get fairly complicated for some file systems. You'll have to read the manual page for the specific mkfs command you'll be using:

% apropos mkfs
mkfs (8)             - build a Linux file system
mkfs.xfs (8)         - construct an XFS filesystem
jfs_mkfs (8)         - create a JFS formatted partition
mkfs.jfs (8)         - create a JFS formatted partition
mkfs.ext2 (8)        - create an ext2/ext3 filesystem
mkfs.ext3 (8)        - create an ext2/ext3 filesystem
mkfs.vfat (8)        - create an MS-DOS file system under Linux
mkfs.msdos (8)       - create an MS-DOS file system under Linux
mkfs.minix (8)       - make a Linux MINIX filesystem
mkfs.bfs (8)         - make an SCO bfs filesystem
mkfs.reiserfs (8)    - The create tool for the Linux ReiserFS filesystem.

Good luck!
...
Randall Schulz
Istvan Gabor wrote:
I checked the inodes:
df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda4            1221600 1221600       0  100% /home/user/sda4
As you can see, exactly what he supposed happened: all the inodes are used. That caused the problem.
Interesting situation - I've never experienced that.
10 GB of space should be enough nevertheless. The question now is how I can set up the partition so that the same size would have more inodes.
Some filesystems have options for specifying that on e.g. mkfs.xxx, but not all. mke2fs has a "-N number-of-inodes" option, for instance.

/Per Jessen, Zürich
On Mon, 2006-02-27 at 09:44 +0100, Per Jessen wrote:
Istvan Gabor wrote:
I checked the inodes:
df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda4            1221600 1221600       0  100% /home/user/sda4
As you can see, exactly what he supposed happened: all the inodes are used. That caused the problem.
Interesting situation - I've never experienced that.
10 GB of space should be enough nevertheless. The question now is how I can set up the partition so that the same size would have more inodes.
Some filesystems have options for specifying that on e.g. mkfs.xxx, but not all. mke2fs has a "-N number-of-inodes" option, for instance.
Is this peculiar to the sd? devices? I opened a shell and got this on my reiser file systems (and one vfat):

linux:/home/Mike # df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/hdc2                  0       0       0     - /
tmpfs                  96988       2   96986    1% /dev/shm
/dev/hdc3                  0       0       0     - /home
/dev/hdc4                  0       0       0     - /mnt/Linux
/dev/hda1                  0       0       0     - /mnt/ME
Mike McMullin wrote:
Is this peculiar to the sd? devices? I opened a shell and got this on my reiser file systems (and one vfat):
linux:/home/Mike # df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/hdc2                  0       0       0     - /
tmpfs                  96988       2   96986    1% /dev/shm
/dev/hdc3                  0       0       0     - /home
/dev/hdc4                  0       0       0     - /mnt/Linux
/dev/hda1                  0       0       0     - /mnt/ME
It shouldn't be device-dependent at all. I don't have reiser nor vfat, but my df -i display looks like this:

per@io:~/workspace/ibwd> df -iT
Filesystem    Type      Inodes   IUsed    IFree IUse% Mounted on
/dev/hda3     jfs      2176480  366307  1810173   17% /
udev          tmpfs     129149     490   128659    1% /dev
/dev/hda1     jfs        98720      62    98658    1% /boot
/dev/hda4     jfs     11884320  285756 11598564    3% /home

/Per Jessen, Zürich
Per, On Monday 27 February 2006 03:47, Per Jessen wrote:
Mike McMullin wrote:
Is this peculiar to the sd? devices? I opened a shell and got this on my reiser file systems (and one vfat):
...
It shouldn't be device-dependent at all. I don't have reiser nor vfat, but my df -i display looks like this:
It's not device-dependent, but it is file-system-type-specific:

% df -iT
Filesystem    Type        Inodes   IUsed    IFree IUse% Mounted on
LABEL=Root10  xfs       35913216  527149 35386067    2% /
tmpfs         tmpfs       223823       2   223821    1% /dev/shm
/dev/sda1     reiserfs         0       0        0     - /repo
/dev/sdb1     xfs       20972800  546688 20426112    3% /root93
/dev/sdd1     xfs       11972544  352569 11619975    3% /root91
/dev/sdd2     xfs       10993744   93990 10899754    1% /home
/dev/sdd3     xfs       11971584    6205 11965379    1% /dar
...
/Per Jessen, Zürich
Randall Schulz
On Mon, 2006-02-27 at 12:47 +0100, Per Jessen wrote:
Mike McMullin wrote:
Is this peculiar to the sd? devices? I opened a shell and got this on my reiser file systems (and one vfat):
linux:/home/Mike # df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/hdc2                  0       0       0     - /
tmpfs                  96988       2   96986    1% /dev/shm
/dev/hdc3                  0       0       0     - /home
/dev/hdc4                  0       0       0     - /mnt/Linux
/dev/hda1                  0       0       0     - /mnt/ME
It shouldn't be device-dependent at all. I don't have reiser nor vfat, but my df -i display looks like this:
per@io:~/workspace/ibwd> df -iT
Filesystem    Type      Inodes   IUsed    IFree IUse% Mounted on
/dev/hda3     jfs      2176480  366307  1810173   17% /
udev          tmpfs     129149     490   128659    1% /dev
/dev/hda1     jfs        98720      62    98658    1% /boot
/dev/hda4     jfs     11884320  285756 11598564    3% /home
I get the same thing as above, except for the addition of the file system type. No indication of what inodes, if any, are used.
On Monday 2006-02-27 at 06:12 -0500, Mike McMullin wrote:
Some filesystems have options for specifying that on e.g. mkfs.xxx, but not all. mke2fs has a "-N number-of-inodes" option, for instance.
Is this peculiar to the sd? devices? I opened a shell and got this on my reiser file systems (and one vfat):
No, it is reiserfs that is peculiar.
linux:/home/Mike # df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/hdc2                  0       0       0     - /
tmpfs                  96988       2   96986    1% /dev/shm
See some samples from mine:

/dev/hdd6    1966976  261343  1705633   14% /
/dev/hdb2       6024      46     5978    1% /boot
/dev/hdd8   12586880   25881 12560999    1% /home
/dev/hdd9          0       0        0     - /xtr

/dev/hdd6 on / type ext3 (rw,noatime,acl,user_xattr)
/dev/hdb2 on /boot type ext2 (rw,noatime)
/dev/hdd8 on /home type xfs (rw,noatime)
/dev/hdd9 on /xtr type reiserfs (rw,acl,user_xattr)

ext2, ext3 and xfs use inodes. Reiserfs doesn't.

--
Cheers,
Carlos Robinson
On Sunday 2006-02-26 at 23:55 +0100, Istvan Gabor wrote: ...
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda4            1221600 1221600       0  100% /home/user/sda4
As you can see, exactly what he supposed happened: all the inodes are used. That caused the problem.
I thought of that... but as I have never experienced it, I didn't recognize the symptoms. I discarded the possibility <:-}
10 GB of space should be enough nevertheless. The question now is how I can set up the partition so that the same size would have more inodes.
It is possible to adjust the inode ratio, in YaST, or manually:

mke2fs - create an ext2/ext3 filesystem
...
-i bytes-per-inode
    Specify the bytes/inode ratio. mke2fs creates an inode for every
    bytes-per-inode bytes of space on the disk.
-N number-of-inodes
    Overrides the default calculation of the number of inodes.
-T fs-type
    Specify how the filesystem is going to be used, so that mke2fs can
    choose optimal filesystem parameters for that use. The supported
    filesystem types are:
    news        one inode per 4kb block
    largefile   one inode per megabyte
    largefile4  one inode per 4 megabytes

But I would simply switch to reiserfs. You said:
As the freedb database contains zillions of small files it
That's perfect for reiserfs. Ideal for you ;-) At least give it a try; I'd like to know if the untarring runs faster on reiserfs under the same conditions - I'm very curious, it should be much faster ;-)

--
Cheers,
Carlos Robinson
participants (10)
- Carl Hartung
- Carlos E. R.
- Istvan Gabor
- Ken Schneider
- Mike McMullin
- Per Jessen
- Randall R Schulz
- Richard Atcheson
- Robert Morrison
- Susemail