[opensuse] usable free space on an ext4 file system
Hello:

I would like to find out how much space is available to a regular user on an ext4 file system. If I understand correctly, the system reserves some space (~5%) for root-only access. I made a ~100 MB ext4 file system for testing, and would like to find out how much space is available to a user.

tune2fs shows this:

# tune2fs -l /dev/sdc1
tune2fs 1.42.8 (20-Jun-2013)
Filesystem volume name:   <none>
Last mounted on:          /mnt1
Filesystem UUID:          740ccbf0-da58-4f69-80cc-bc48396345ce
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              24096
Block count:              96356
Reserved block count:     4817
Free blocks:              87661
Free inodes:              24085
First block:              1
Block size:               1024
Fragment size:            1024
Reserved GDT blocks:      256
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         2008
Inode blocks per group:   251
Flex block group size:    16
Filesystem created:       Thu Feb 13 20:18:41 2020
Last mount time:          Thu Feb 13 20:18:55 2020
Last write time:          Thu Feb 13 20:18:55 2020
Mount count:              1
Maximum mount count:      -1
Last checked:             Thu Feb 13 20:18:41 2020
Check interval:           0 (<none>)
Lifetime writes:          4483 kB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      5a3a878d-c9e3-4e36-b939-1ca844f69f0c
Journal backup:           inode blocks

df shows this:

# df /dev/sdc1
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sdc1          89211  1551     80916   2% /mnt1

If I sum the numbers they don't match, e.g. df's 1551+80916 does not give 89211. Furthermore, tune2fs' block count, 96356, differs from the above too. If I add tune2fs' reserved and free blocks (4817+87661) it doesn't give 96356.

I would like to know how to interpret the reported block numbers, and how I can calculate the space available to regular users and to root.

Thanks,
Istvan

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
On Thu, 13 Feb 2020 21:29:37 +0100, Istvan Gabor wrote:
...
I would like to know how to interpret the reported block numbers and how can I calculate the space available to regular users and root.
Use reiserfs or xfs instead of ext4 (or btrfs) :) The former two give accurate figures; the latter two don't! The df docs I've found don't even define what the right answer is supposed to be, AFAICT.

You could try creating a load of files until the filesystem fills up; that would give you the answer. Then delete them all.
On 13/02/2020 21.44, Dave Howorth wrote:
| On Thu, 13 Feb 2020 21:29:37 +0100 Istvan Gabor <> wrote:
|> I would like to know how to interpret the reported block numbers
|> and how can I calculate the space available to regular users and
|> root.
|
| You could try creating a load of files until the filesystem filled
| up. That would give you the answer, then delete them all.

The result would depend on the file sizes and on their granularity compared to the sector size. Better to create a single large file, as big as it can grow; dd can do it.

BUT, the available file space on xfs depends on how many files you create, because the inodes are dynamic. And on reiserfs... even more so.

--
Cheers / Saludos,
Carlos E. R.
(from 15.1 x86_64 at Telcontar)
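Carlos's single-big-file measurement can be sketched in shell. This is only a sketch: TARGET and CAP are names introduced here, not from the thread, and the defaults deliberately point at a small throwaway file under /tmp. For a real measurement, set TARGET to a file on the filesystem under test (the thread's /mnt1) and raise CAP past the filesystem size so dd actually runs into ENOSPC.

```shell
# Grow one file until dd hits ENOSPC (or the safety cap), then read back
# how much was actually allocated -- that is the space writable by this user.
TARGET=${TARGET:-/tmp/fill-demo.img}   # illustrative default, not from the thread
CAP=${CAP:-8}                          # safety cap, in MiB; raise it for real use
dd if=/dev/zero of="$TARGET" bs=1M count="$CAP" conv=fsync 2>/dev/null || true
KB=$(du -k "$TARGET" | cut -f1)        # KiB actually allocated to the file
echo "writable space for this user: ${KB} KiB"
rm -f "$TARGET"
```

Run as a regular user this measures user-writable space; run as root it would also eat into the ~5% root reserve, so the two runs give different answers.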
On 13/02/2020 15:29, Istvan Gabor wrote:
I would like to find out how much space is available for a regular user on an ext4 file system.
ROTFLMAO!

For ALL the file systems I've ever dealt with, the concepts of "usable space" and "free space" are distinct.

In the Ancient of Days, that is before the Internet, before Linux, before the UNIX Systems Group got its sticky fingers on the source, there was the traditional file system, known as V7. This offered stability and integrity over the V6 file system by ordering writes to metadata.

V6/V7 set a tradition in file systems that has held for nearly half a century now: there is i-node space and there is data space, and one may become exhausted before the other.

Now there are file systems that break with that tradition, most visibly ReiserFS and BtrFS. Each of them allocates BOTH i-nodes and data dynamically from a pool, all of it managed by B-tree lists. Because BOTH come from the same pool, one can never be exhausted before the other.

I'm an advocate of ReiserFS and consider pre-allocation to be a Major Sin of file system design.

Now, Ext4 adherents might claim B-tree-ness, but ext4 still has this inode/data space division, with the number of inodes determined at mkfs time. In my opinion the ratio of the number of inodes to the file system space is huge, but YMMV. The preallocation means one can run out before the other, usually data space running out before inodes. Yes, you can tune the bytes-per-inode, the number of inodes and the inode size: great if you have done metrication, and probably a lot of step-and-repeat and load testing, and logged your results, and understand the dynamics of the application or application set that will be using the file system, and can systematically justify your settings. But I don't think this is what Istvan is asking for.

The TRUE B-tree file systems like ReiserFS and BtrFS WILL tell you how much free space is available. What they won't tell you is how that is contained, because it isn't. Running stat on those file systems won't tell you what percentage of the inodes are used, and hence how many files you can create now, because the concept is meaningless.

Let me insert a sort-of-sidebar here. We still have a legacy from the V6/V7 days of reporting file system sizes in 512-byte blocks. But we gave up on using 512-byte blocks about the time the DEC VAX was introduced. The Berkeley Fast File System moved to 4K blocks for a variety of reasons, and we've been there ever since. Initially (and still) the kernel did the re-blocking of the logical 4K to the disk's 512, but recently disk manufacturers have caught up with the idea. I won't go into the justifications or the analysis behind this; suffice it to say that bigger script programs and larger binaries were contributing factors. See PAGESIZE(1).

One thing did emerge from the 4K blocks: we still had many files that were small. Take a look at the files under /etc and see how many are under 512 bytes. Come to that, see how many are under 64 or even 32 bytes.

find /etc -type f -size -2b -print0 | xargs -0 ls -l

Some file systems try putting those very small files into the inode space. Consider: the inode has room for pointers to the direct and possibly indirect blocks. How big are those fields?
https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Inline_Data

So, in theory at least, it is possible, if all the data space of an ext4 FS is exhausted before the inode space, to create a small file whose data resides in the inode.

Now do you see why the concept of "usable" and "free" space is pretty arbitrary? The only thing that matters becomes "is there enough for the needs?" As illustrated above, if the need is just a few bytes, like the dozen or so of a hostname, ...

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
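Anton's observation about tiny files is easy to quantify; here is a count-only variant of his find line (the 64-byte threshold via -size -64c, measured in bytes rather than 512-byte blocks, is a choice made here for illustration):

```shell
# Count files under /etc smaller than 64 bytes. "-size -64c" is byte-granular;
# Anton's "-size -2b" counts in 512-byte blocks (files of at most one block).
SMALL=$(find /etc -type f -size -64c 2>/dev/null | wc -l)
echo "files under 64 bytes in /etc: $SMALL"
```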
On Fri, 14 Feb 2020 09:00:28 -0500, Anton Aylward wrote:
On 13/02/2020 15:29, Istvan Gabor wrote:
I would like to find out how much space is available for a regular user on an ext4 file system.
...
I wonder if anybody else got to the end and wondered, like me, why XFS didn't get a mention? JFS and ZFS I can understand, but XFS is a mainstream system.

Oh, and it's well-known that btrfs doesn't tell you the truth unless you ask it in a very particular way.
On 14/02/2020 09:54, Dave Howorth wrote:
I wonder if anybody else got to the end and wondered, like me, why XFS didn't get a mention? JFS and ZFS I can understand, but XFS is a mainstream system.
I deliberately left XFS out because... well... it's complicated.

I'm not sure how the allocator in XFS works, but there's all manner of things that you can adjust, and all manner of constraints. And ambiguities. There is a constraint that the inodes have to be in the lower portion, and a limit on what percentage of the FS's resources they can consume (which implies they are allocated dynamically). But block sizes and inode sizes can be adjusted. You can also preallocate, which is a curious distinction.

Is XFS really B-tree inside? Well yes, https://lwn.net/Articles/747633/ but only sort-of. Rather than the one-B-tree approach of ReiserFS or BtrFS:

    An XFS filesystem is split into allocation groups, "which are like
    mini-filesystems"; they have their own free-space index B-trees, inode
    B-trees, reverse-mapping B-trees, and so on. File data is referenced by
    extents, with the help of B-trees. "Directories and attributes are more
    B-trees"; the directory B-tree is the most complex as it is a "virtually
    mapped, multiple index B-tree with all sorts of hashing" for scalability.

and a more sort of "well, only in some circumstances":
https://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html...

    When the extent map in an inode grows beyond the inode's space, the inode
    format is changed to a "btree". The inode contains a filesystem block
    pointer to the B+tree extent map for the directory's blocks.

So what was it before?

As far as free space goes, a 'mature' or "aggressively used" XFS may also have more than one free-space list, which I'm sure leads to confusion when reporting.

All in all, I don't think I can talk of the dynamics of XFS in the same way that I can of ReiserFS or BtrFS. This is not to denigrate XFS. But really: I prefer ReiserFS for the fact that there's not much fiddling with mkfs settings or tuning you can do. With Ext4, XFS and to a degree BtrFS, there's always a feeling that if you'd mkfs'd it differently, or tuned it differently, it would be somehow "better" for your needs.

I've not experimented with patching the kernel to work with Reiser4FS. I do wish that somehow there would be the commitment for it, to bring it into the mainstream kernel.
Anton, et al --

...and then Anton Aylward said...
% On 14/02/2020 09:54, Dave Howorth wrote:
% > I wonder if anybody else got to the end and wondered, like me, why XFS
% > didn't get a mention? JFS and ZFS I can understand, but XFS is a
% > mainstream system.
%
% I deliberately left XFS out because .... well .. it's complicated.

Actually, I figured that it was because two examples were enough :-) I loved the reading, but I didn't expect we'd have to cover *every* other option...

...
% I've not experimented with patching the kernel to work with Reiser4FS. I do
% wish that somehow there would be the commitment for it and to bring it into the
% mainstream kernel.

*sigh* +1

HANW :-D

--
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt
On 14/02/2020 15.54, Dave Howorth wrote:
On Fri, 14 Feb 2020 09:00:28 -0500, Anton Aylward wrote:
On 13/02/2020 15:29, Istvan Gabor wrote:
...
See PAGESIZE(1)
One thing did emerge from the 4K blocks. We still had many files that were small. Take a look at the files under /etc and see how many are under 512 bytes. Come to that, see how many are under 64 or even 32 bytes.
find /etc -type f -size -2b -print0 | xargs -0 ls -l
Just 1735 files.
...
I wonder if anybody else got to the end and wondered, like me, why XFS didn't get a mention? JFS and ZFS I can understand, but XFS is a mainstream system.
Me :-)
Oh, and it's well-known that btrfs doesn't tell you the truth unless you ask it in a very particular way.
Indeed! -- Cheers / Saludos, Carlos E. R. (from 15.1 x86_64 at Telcontar)
On 14/02/2020 15:00, Anton Aylward wrote:
Now do you see why the concept of "usable" and "free" space is pretty arbitrary?
Simply compare du and df results...

jdd
--
http://dodin.org
On 13.02.2020 23:29, Istvan Gabor wrote:
...
df shows this:
# df /dev/sdc1 Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdc1 89211 1551 80916 2% /mnt1
If I sum the numbers they don't match, eg. df's 1551+80916 does not give 89211.
"Used" is synthetic value which is "total - free". "Available" is "free - root_reserve - extra_reserve". Reserved root blocks is 4817 and extra reserve is 1927 (it is 2% up to 4096 blocks). So 89211 - 1551 - 4817 - 1927 == 80916. Yes, extra reserve seems to be undocumented.
Furthermore tune2fs' block count is 96356, differs from the above too.
If I add tune2fs' reserved and free block (4817+87661) it doesn't give 96356.
There is additional overhead. It is not fixed, and depends on the actual filesystem layout. In the end the filesystem has 89211 blocks it can use for data.
I would like to know how to interpret the reported block numbers and how can I calculate the space available to regular users and root.
I am not sure whether root is allowed to explicitly consume the extra reserve (2% in your case), but this space may be allocated during filesystem operations. I think for *data* root in your case has 89211 - 1927, and regular users have 89211 - 1927 - 4817.
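In numbers, Andrei's reading (and it is explicitly his interpretation, using the 1927-block extra reserve he inferred) works out as:

```shell
TOTAL=89211        # df 1K-blocks (blocks usable for data)
EXTRA=1927         # undocumented extra reserve (Andrei's inference)
ROOT_RESERVE=4817  # tune2fs "Reserved block count"
ROOT_DATA=$(( TOTAL - EXTRA ))                 # data space available to root
USER_DATA=$(( TOTAL - EXTRA - ROOT_RESERVE ))  # data space available to users
echo "root: ${ROOT_DATA}K, users: ${USER_DATA}K"   # root: 87284K, users: 82467K
```

Subtracting the 1551 blocks already in use from the user figure gives exactly df's "Available" of 80916.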
On Fri, 14 Feb 2020 22:13:21 +0300, Andrei Borzenkov wrote:
...
As ever, thanks :) for the great answers. :)
On Fri, 14 Feb 2020 22:13:21 +0300, Andrei Borzenkov wrote:
On 13.02.2020 23:29, Istvan Gabor wrote:
...
"Used" is synthetic value which is "total - free". "Available" is "free - root_reserve - extra_reserve". Reserved root blocks is 4817 and extra reserve is 1927 (it is 2% up to 4096 blocks). So
89211 - 1551 - 4817 - 1927 == 80916.
Yes, extra reserve seems to be undocumented.
...
I am not sure if root is allowed to explicitly consume extra reserve (2% in your case), but this space may be allocated during filesystem operations. I think for *data* root in your case has 89211 - 1927 and users 89211 - 1927 - 4817.
Thank you all for responding, especially Andrei and Anton.

Where did you get the number 1927? Did you just calculate it from the available data? Is there a way to get it reported?

What I want to do is to make one container/image file on the whole partition, to fill it as much as possible. How do I calculate the maximum file size in this case, for only one big file? When I tried to create an 80916K file (as a normal user) the process failed. When I created a container file of exactly 80916K in my home directory, I could not copy it to the mentioned partition as a regular user, but I could copy it as root. How can I calculate the difference between 80916K and the maximum file size allowed on the partition (for only one file)?

Thanks again,
Istvan
On 15.02.2020 13:41, Istvan Gabor wrote:
On Fri, 14 Feb 2020 22:13:21 +0300, Andrei Borzenkov wrote:
13.02.2020 23:29, Istvan Gabor пишет:
Hello:
I would like to find out how much space is available for a regular user on an ext4 file system.
If I know correctly the system reserves some space (~5%) for root only acces.
I made a ~100 M ext4 file system for testing, and would like to find out how much space is available for a user.
tune2fs shows this:
# tune2fs -l /dev/sdc1 tune2fs 1.42.8 (20-Jun-2013) Filesystem volume name: <none> Last mounted on: /mnt1 Filesystem UUID: 740ccbf0-da58-4f69-80cc-bc48396345ce Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 24096 Block count: 96356 Reserved block count: 4817 Free blocks: 87661 Free inodes: 24085 First block: 1 Block size: 1024 Fragment size: 1024 Reserved GDT blocks: 256 Blocks per group: 8192 Fragments per group: 8192 Inodes per group: 2008 Inode blocks per group: 251 Flex block group size: 16 Filesystem created: Thu Feb 13 20:18:41 2020 Last mount time: Thu Feb 13 20:18:55 2020 Last write time: Thu Feb 13 20:18:55 2020 Mount count: 1 Maximum mount count: -1 Last checked: Thu Feb 13 20:18:41 2020 Check interval: 0 (<none>) Lifetime writes: 4483 kB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 128 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 5a3a878d-c9e3-4e36-b939-1ca844f69f0c Journal backup: inode blocks
df shows this:
# df /dev/sdc1 Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdc1 89211 1551 80916 2% /mnt1
If I sum the numbers they don't match, e.g. df's 1551 + 80916 does not give 89211.
"Used" is synthetic value which is "total - free". "Available" is "free - root_reserve - extra_reserve". Reserved root blocks is 4817 and extra reserve is 1927 (it is 2% up to 4096 blocks). So
89211 - 1551 - 4817 - 1927 == 80916.
Yes, extra reserve seems to be undocumented.
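Andrei's arithmetic can be checked in a few lines. Note that the "2% capped at 4096 blocks" rule for the extra reserve is inferred from this thread, not a documented ext4 interface:

```python
# Values reported by tune2fs -l and df for this ~100 MB filesystem.
block_count = 96356   # tune2fs "Block count"
root_reserve = 4817   # tune2fs "Reserved block count"
df_total = 89211      # df "1K-blocks"
df_used = 1551        # df "Used"

# Assumed rule from this thread: an undocumented extra reserve of
# 2% of the block count, capped at 4096 blocks.
extra_reserve = min(block_count * 2 // 100, 4096)
print(extra_reserve)                                      # 1927
print(df_total - df_used - root_reserve - extra_reserve)  # 80916
```

The last line reproduces df's "Available" column exactly.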
Furthermore, tune2fs' block count, 96356, differs from the above too.
If I add tune2fs' reserved and free blocks (4817 + 87661) it doesn't give 96356.
There is additional overhead. It is not fixed and depends on the actual filesystem layout. In the end the filesystem has 89211 blocks it can use for data.
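Most of the gap between tune2fs' 96356 blocks and df's 89211 is inode tables and the journal. A rough accounting, using the figures from the tune2fs output above; the journal size of 4096 blocks is an assumption (a typical mke2fs default for a filesystem of roughly this block count):

```python
import math

# Figures from the tune2fs -l output above.
block_count = 96356
blocks_per_group = 8192
inode_blocks_per_group = 251
usable = 89211          # df's "1K-blocks" total

groups = math.ceil(block_count / blocks_per_group)   # 12 block groups
inode_tables = groups * inode_blocks_per_group       # 3012 blocks
journal = 4096  # assumption: typical mke2fs journal size at ~100k blocks

overhead = block_count - usable                      # 7145 blocks
print(overhead, inode_tables + journal)
# 7145 vs 7108 -- the remaining few dozen blocks are superblock
# copies, group descriptors and block/inode bitmaps.
```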
I would like to know how to interpret the reported block numbers and how can I calculate the space available to regular users and root.
I am not sure whether root is allowed to explicitly consume the extra reserve (2% in your case), but this space may be allocated during filesystem operations. I think for *data* root in your case has 89211 - 1927 blocks and regular users have 89211 - 1927 - 4817.
The extra reserve is for internal filesystem use only, so it is not available even to root.
Thank you all for responding, especially Andrei and Anton.
Where did you get the number 1927? Did you just calculate it from the available data?
It is 2% of the total block count, capped at 4096 blocks.
Is there a way to get it reported?
Not that I am aware of.
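No tool seems to report the extra reserve directly, but it can be derived from the `tune2fs -l` output. A sketch that parses just the numeric fields needed here (field names match the output quoted earlier; the 2%/4096 rule is inferred from this thread):

```python
# A fragment of the `tune2fs -l /dev/sdc1` output quoted earlier,
# limited to the numeric fields this calculation needs.
tune2fs_output = """\
Block count:              96356
Reserved block count:     4817
Block size:               1024
"""

fields = {}
for line in tune2fs_output.splitlines():
    key, _, value = line.partition(":")
    fields[key.strip()] = int(value.strip())

block_count = fields["Block count"]
root_reserve = fields["Reserved block count"]
extra_reserve = min(block_count * 2 // 100, 4096)  # inferred rule

print("root reserve :", root_reserve)   # 4817
print("extra reserve:", extra_reserve)  # 1927
```

In practice you would feed it the live output, e.g. via `subprocess.check_output(["tune2fs", "-l", "/dev/sdc1"])`.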
What I want to do is to make one container/image file on the whole partition, filling it as much as possible. How do I calculate the maximum file size in this case, for only one big file?
When I tried to create an 80916K file (as a normal user) the process failed.
Traditionally, large files (where "large" meant several dozen KB) needed indirect blocks in addition to the actual data blocks. Today ext4 implements extent-based allocation; I do not know the details, but it is quite possible that there is extra overhead as well.
When I created a container file of exactly 80916K in my home directory, I could not copy it to the mentioned partition as a regular user, but I could as root.
Well, root has extra 4817 blocks.
How can I calculate the difference between 80916 and the maximum file size allowed on the partition (for only 1 file)?
I am afraid you need to study ext4 code to learn how it implements space allocation and what overhead it has.
15.02.2020 22:52, Andrei Borzenkov пишет:
When I tried to create an 80916K file (as a normal user) the process failed.
Traditionally, large files (where "large" meant several dozen KB) needed indirect blocks in addition to the actual data blocks. Today ext4 implements extent-based allocation; I do not know the details, but it is quite possible that there is extra overhead as well.
Yes, the same happens with extents. There is no contiguous range of 80916 blocks, so the file has 11 extents (the size of each extent varies). Only 4 extent descriptors fit into the inode, so an additional block is required to hold the rest. Exactly as it was with indirect blocks (except that more space would be required).
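As a rough model only (the real ext4 extent tree can be multi-level): each extent descriptor is 12 bytes, four fit in the inode itself, and a spilled block of B bytes holds (B - 12) / 12 descriptors after its 12-byte header. A hypothetical helper under those simplifying assumptions:

```python
def extent_metadata_blocks(n_extents, block_size=1024):
    """Rough estimate of extra metadata blocks needed for n_extents.

    Simplified one-level model: 4 extent descriptors fit in the inode;
    beyond that, each extra block holds (block_size - 12) // 12
    descriptors (12-byte header, 12-byte entries). Real ext4 can need
    a deeper extent tree, so treat this as a lower bound.
    """
    if n_extents <= 4:
        return 0
    per_block = (block_size - 12) // 12   # 84 descriptors per 1K block
    return -(-n_extents // per_block)     # ceiling division

print(extent_metadata_blocks(11))   # 1 -- matches the 11-extent case above
```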
When I created a container file of exactly 80916K in my home directory, I could not copy it to the mentioned partition as a regular user, but I could as root.
Well, root has extra 4817 blocks.
How can I calculate the difference between 80916 and the maximum file size allowed on the partition (for only 1 file)?
I am afraid you need to study ext4 code to learn how it implements space allocation and what overhead it has.
Still applies :)
You would need to ensure that the number of extents does not exceed 4, e.g. select an appropriate block size and block group size (they are related). With
mkfs -t ext4 -b 4096 -g 32768
I was able to use the full space shown as "Available". You may be able to gain even more space by reducing the inode count, the space reserved for future growth, etc.
On Sun, 16 Feb 2020 10:53:04 +0300, Andrei Borzenkov wrote:
15.02.2020 22:52, Andrei Borzenkov пишет:
When I tried to create an 80916K file (as a normal user) the process failed.
Traditionally, large files (where "large" meant several dozen KB) needed indirect blocks in addition to the actual data blocks. Today ext4 implements extent-based allocation; I do not know the details, but it is quite possible that there is extra overhead as well.
Yes, the same happens with extents. There is no contiguous range of 80916 blocks, so the file has 11 extents (the size of each extent varies). Only 4 extent descriptors fit into the inode, so an additional block is required to hold the rest. Exactly as it was with indirect blocks (except that more space would be required).
When I created a container file of exactly 80916K in my home directory, I could not copy it to the mentioned partition as a regular user, but I could as root.
Well, root has extra 4817 blocks.
How can I calculate the difference between 80916 and the maximum file size allowed on the partition (for only 1 file)?
I am afraid you need to study ext4 code to learn how it implements space allocation and what overhead it has.
Still applies :)
You would need to ensure that the number of extents does not exceed 4, e.g. select an appropriate block size and block group size (they are related). With
mkfs -t ext4 -b 4096 -g 32768
I was able to use full space shown as "Available".
You may be able to gain even more space by reducing the inode count, the space reserved for future growth, etc.
Andrei, thank you very much for looking into this in such detail.
I guess I solved my problem. I made an empty directory on the partition; it takes one 1K block. A small non-empty text file also uses one 1K block. I could make a container sized one 1K block less than the available space, and it occupies 100% of the partition with 0 free blocks remaining.
Thank you again,
Istvan
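The trial-and-error sizing above can also be read directly from `statvfs()`: `f_bavail` is the block count available to unprivileged users (what df prints as "Available"), while `f_bfree` also includes the root-only reserve. A sketch (the path "/" is only an example; point it at the mount in question):

```python
import os

st = os.statvfs("/")  # any mounted path; "/" is just an example

frsize = st.f_frsize               # fundamental block size in bytes
avail_user = st.f_bavail * frsize  # what df reports as "Available"
free_root = st.f_bfree * frsize    # includes the root-only reserve

print("available to users      :", avail_user)
print("free incl. root reserve :", free_root)
# As this thread shows, a single large file may still need slightly
# less than avail_user once extent/indirect metadata is accounted for.
```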
On Sun, 16 Feb 2020 23:11:02 +0100, Istvan Gabor wrote: [Long snip]
Andrei, thank you very much for looking into this in such detail.
I guess I solved my problem. I made an empty directory on the partition; it takes one 1K block. A small non-empty text file also uses one 1K block. I could make a container sized one 1K block less than the available space, and it occupies 100% of the partition with 0 free blocks remaining.
What I wrote above only applies to openSUSE Leap (tried in 42.2 and 15.1). When I try to copy the same file to the same partition in openSUSE 13.1, it fails. In summary: /mnt1 is an ext4 file system with 80916 available blocks:
df /mnt1
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sdc1          89211  1551     80916   2% /mnt1
The file I want to copy is 80915 blocks:
ls -Gg --block-size=K testfile
-rw------- 1 80915K Feb 17 18:00 testfile
I can copy testfile to /mnt1 in Leap 15.1 as a regular user. I can copy testfile to /mnt1 in openSUSE 13.1 as root. If I try to copy testfile to /mnt1 in 13.1 as a normal user, it fails:
cp -a testfile /mnt1
cp: error writing ‘/mnt1/testfile’: No space left on device
cp: failed to extend ‘/mnt1/testfile’: No space left on device
I do know that 13.1 is obsolete, but I would still like to know what causes the difference and whether it can be fixed (in 13.1).
Thanks,
Istvan
On 2020-02-13 21:29, Istvan Gabor wrote:
df shows this:
# df /dev/sdc1
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sdc1          89211  1551     80916   2% /mnt1
If I sum the numbers they don't match, e.g. df's 1551 + 80916 does not give 89211.
https://www.gnu.org/software/coreutils/faq/coreutils-faq.html#df-Size-and-Us...
Have fun,
Berny
participants (8)
- Andrei Borzenkov
- Anton Aylward
- Bernhard Voelker
- Carlos E. R.
- Dave Howorth
- David T-G
- Istvan Gabor
- jdd@dodin.org