Greg Freemyer said the following on 02/15/2010 11:40 AM:
>> Given that modern file systems are fragmentation resistant, please
>> explain how fragmentation is a problem on Linux.
> I don't have any stats to back it up, but XFS has had a defragger for
> years and ext4 has a semi-released one. (The ext4 version is in the
> kernel and the official userspace tarball, but is not fully tested /
> reviewed, per Ted Ts'o, the ext4 maintainer.)
>
> I don't know if and when fragmentation becomes bad enough to use these
> tools, but they would not exist if the answer were 'never'.
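For the record, the tools in question are, I take it, xfs_fsr for XFS and e4defrag for ext4. If you only want to know how bad things are, without moving a single block, e4defrag can score fragmentation rather than fix it. The mount point below is just a placeholder:

  # xfs_fsr -v /srv        # reorganize files on a mounted XFS file system, verbosely
  # e4defrag -c /srv       # report an ext4 fragmentation score only; moves nothing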
Let's talk metrics:

1. What degree of file 'churn' is necessary before any fragmentation that might occur on a modern file system - ReiserFS, ext2/3/4, XFS, or even just the old Berkeley-model FFS - gets bad enough to impact performance?

2. What is the effort/time required to 'defrag', how close does that come to a fully 'optimal' file system layout, and by what criteria are you measuring optimality?

As I mentioned, I did relevant measurements on the V7 file system back in the '80s, with drives that were a LOT slower than today's. The V7 file system is very prone to churn causing fragmentation. I found out some interesting things:

a) Having separate system, user and tmp partitions helped more than keeping them all in one partition and trying to defrag it.

b) Most churn occurred on /tmp. If you re-mkfs'd /tmp every few days it did wonders.

c) A faster drive outperformed an optimally laid out (aka newly installed, no fragmentation) old drive, even after the faster drive had "churned" for a couple of months.

d) More RAM allowed for more buffers and less swapping/paging, which meant less disk activity, so more bandwidth was available for file activity.

Since the mid '80s we've had, as I keep saying, much better file system design. The amount of fragmentation caused by file churn is way, way down. Does it happen? Yes. Does it matter? No.

Saying that XFS and ext4 have 'in-kernel defraggers' is not quite correct. You might as well say that ReiserFS has one too, and that even the old Berkeley FFS had one - just using 'cylinder groups' kept fragmentation down. What is in the kernel is code that re-packs small files into available space or occasionally moves blocks. It's not a new idea, and I'd hardly call it a defragger when it is an intrinsic part of the way the FS works. As such, a defrag "product" or "tool" would take effort and give little benefit.

If you want to call this a 'myth', then go ahead. The reality is that defrag belongs with the Windows/MS-DOS file systems and other antiquated designs. Fragmentation is more of a problem on FAT file systems than on NTFS, I'm told, largely because the FAT file system is old and simplistic, dating back to the 1970s.

Run fsck on your file systems; it should tell you the degree of fragmentation (there is a worked example at the end of this message). My root file system - three years old, upgraded from 11.0 to 11.1, with regular updates - has only 2% non-contiguous files. My /tmp FS, which gets cleaned out regularly though not mkfs'd, has less than that.

In fact I've just stepped through running fsck on my various file systems. The one that gets the most churn apart from /tmp is the one I use for downloads:

  Downloads: 4902/66096 files (3.3% non-contiguous), 86176/264192 blocks

That's an ext3 FS. None of my reiserfs file systems are above 0.5%.

Myth? Call it that if you want. I'd call it good design.
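For anyone who wants to reproduce those figures on an ext2/3/4 file system: e2fsck only prints that summary line when it actually performs a check, so force one. A minimal example - the device name is only a placeholder, and the output shown simply re-uses my Downloads numbers from above:

  # fsck -fn /dev/sdXN
  /dev/sdXN: 4902/66096 files (3.3% non-contiguous), 86176/264192 blocks

Here -n opens the file system read-only and answers 'no' to every repair prompt (best run on an unmounted file system, or you'll get warnings), and -f forces the check even if the file system is marked clean. The '(3.3% non-contiguous)' field is the fragmentation figure I've been quoting.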