How do I know when and how to defrag my computer with openSUSE, or don't I have to worry about it?

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org
John Heinen said the following on 02/13/2010 09:35 AM:
How do I know when and how to defrag my computer with open suse , or don't I have to worry about it?
Don't worry about it. I'm far from convinced defrag makes sense, even under Windows.

Back in the late 80s I wrote a defrag tool for SCO UNIX. When I profiled it I found that:
a) the improvements were slight;
b) buying more memory, and hence more buffer and less swap, was a better investment;
c) if your disk is properly partitioned then the "system" part doesn't suffer "churn".

Our file systems are a LOT better now. Space allocation strategies are way better. It's my opinion that the milliseconds you _might_ save by having a defragged disk are going to be swamped by the time it takes to defrag the disk.

If you're talking about a database, that's an entirely different problem. Most "slow" databases I've seen were the result of poor schema analysis and design and poor index structure. Nothing to do with the disk.

--
Quality is free, but only for those who are willing to pay heavily for it. - Tom Demarco, "Peopleware"
John Heinen wrote:
How do I know when and how to defrag my computer with open suse , or don't I have to worry about it?
Don't worry about it. Linux file systems are fragmentation resistant. It's only with Windows that you get much benefit from defragging. It's beyond belief that MS still uses such an archaic file system. OS/2 had a fragmentation-resistant file system (HPFS) over twenty years ago.
On 2010/02/14 09:16 (GMT-0500) James Knott composed:
OS/2 had a fragmentation resistant file system (HPFS) over twenty years ago.
It still has it, and I still use it. :-)

--
"Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other." John Adams, 2nd US President
Team OS/2 ** Reg. Linux User #211409
Felix Miata *** http://fm.no-ip.com/
Felix Miata wrote:
On 2010/02/14 09:16 (GMT-0500) James Knott composed:
OS/2 had a fragmentation resistant file system (HPFS) over twenty years ago.
It still has it, and I still use it. :-)
Speaking of OS/2, it came out in a lawsuit involving MS that IBM "threw in the towel" on OS/2 because developers HAD to sign a contract saying that if they wrote for Windows, they COULD NOT write for anything else. THANK GOD FOR OSS - MS can't stop it - they can only - steal - from it.

Speaking of stealing, it also came out that, while everyone was loving DOS, MS decided to FORCE Windows on the public by making PC manufacturers buy a license for Windows - whether they installed it or not.

For the curious, this and more can be found on my semi-hidden business webpage that I've been building since 1998. (Which is when I dumped Windows and switched to Linux, over the lawsuit involving MS's IE and Blue Mountain Greeting Cards, in which IE either deleted or failed to deliver over 1.5 million internet greeting cards.) Again, thank god for Netscape -> Mozilla -> Firefox.

http://www.hechlerpianoandorgan.com/other/microsoft.html

Duaine

P.S. I use this webpage as a promotional for the use of Linux. (You can too - I would love it)

--
Duaine Hechler
Piano, Player Piano, Pump Organ Tuning, Servicing & Rebuilding
Reed Organ Society Member
Florissant, MO 63034
(314) 838-5587
dahechler@att.net
www.hechlerpianoandorgan.com
--
Home & Business user of Linux - 10 years
On Saturday 13 of February 2010, John Heinen wrote:
How do I know when and how to defrag my computer with open suse , or don't I have to worry about it?
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has really been bothered enough to write any reasonably usable generic tool. If you care about this enough to do some work, you can e.g. google for how to defragment certain files that make Firefox startup slower, and similar issues. If you have the preload package installed, it also tries to lay out the files that are needed during startup more effectively. But in general this is probably not worth the trouble anymore, especially given that SSDs are starting to make this problem moot.

--
Lubos Lunak
openSUSE Boosters team, KDE developer
l.lunak@suse.cz , l.lunak@kde.org
Lubos Lunak said the following on 02/15/2010 05:03 AM:
On Saturday 13 of February 2010, John Heinen wrote:
How do I know when and how to defrag my computer with open suse , or don't I have to worry about it?
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Not quite correct. As I said, I wrote a defrag tool on SCO for the V7 file system back in the mid 1980s. When I ran metrics on the results I was disappointed. The improvement was negligible.

Since then we have developed file systems that are less prone to fragmentation in the first place. Add to that denser, smaller drives (less head movement), better placement strategies, packing of small files, and much more - like cheaper memory and better caching.

Back in the 80s I couldn't price my defrag tool so that it was a better buy than a new disk drive. And the price of rotating magnetic media continues to fall. On top of that, as you point out, SSDs are coming along.
If you care about this enough to do some work, you can e.g. google for how to deframent some files that make Firefox startup slower, and similar issues.
That's a different issue. I posted that reference, and it was about a FILE, not a file system - and a database file at that. I'd hardly call it the same kind of issue. Heck, you can speed up databases by adding the correct index files, and that has _nothing_ to do with defragmentation.

--
Mary had a little key (It's all she could export), and all the email that she sent was opened at the Fort. -- Ron Rivest
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
On Mon, Feb 15, 2010 at 11:15 AM, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
I don't have any stats to back it up, but xfs has had a defragger for years and ext4 has a semi-released defragger. (The ext4 version is in the kernel and the official userspace tarball, but is not fully tested / reviewed per Ted Ts'o, the ext4 maintainer.)

I don't know if and when fragmentation becomes bad enough to use these tools, but they would not exist if the answer was never.

Greg
Greg Freemyer said the following on 02/15/2010 11:40 AM:
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
I don't have any stats to back it up, but xfs has had a defragger for years and ext4 has a semi-released defragger. (The ext4 version is in the kernel and the official userspace tarball, but is not fully tested / reviewed per Ted Ts'o, the ext4 maintainer.)
I don't know if and when fragmentation becomes bad enough to use these tools, but they would not exist if the answer was never.
Let's talk metrics:

1. What degree of file 'churn' is necessary before any fragmentation that might occur on a modern file system such as reiser, ext2/3/4, xfs, or even just the old Berkeley-model FFS gets bad enough to impact performance?

2. What is the effort/time required to 'defrag', how close does that come to a fully 'optimal' file system layout, and by what criteria are you measuring optimality?

As I mentioned, I did relevant measurements on the V7 file system back in the 80s, with drives that were a LOT slower than today's. The V7 file system is very prone to churn causing fragmentation. I found out some interesting things:

a) having separate system, user, and tmp partitions helped more than trying to defrag with them all in one partition.
b) most churn occurred on /tmp. If you re-mkfs'd /tmp every few days it did wonders.
c) a faster drive outperformed an optimally laid out (aka newly installed, no fragmentation) old drive - even after it had "churned" for a couple of months.
d) more RAM allowed for more buffers and less swapping/paging, and that meant less disk activity, so more bandwidth was available for file activity.

Since the mid 80s we've had, as I keep saying, much better file system design. The amount of fragmentation caused by file churn is way, way down. Does it happen? Yes. Does it matter? No.

Saying that XFS and ext4 have 'in kernel defraggers' is not quite correct. You might as well say that Reiser has them too, and even the old Berkeley FFS had them - just using 'cylinder groups' kept fragmentation down. What is in the kernel is code that re-packs small files into available space or occasionally moves blocks. It's not a new idea. I'd hardly call it a defragger if it is an intrinsic part of the way the FS works. As such, a defrag "product" or "tool" would take effort and give little benefit.

If you want to call this a 'myth', then go ahead. The reality is that defrag belongs with the Windows/MS-DOS FS and other antiquated file systems.
Fragmentation is more of a problem on FAT file systems than NTFS, I'm told, largely because the FAT file system is old and simplistic, dating back to the PDP-8.

Run fsck on your file systems. It should tell you the degree of fragmentation. My root filesystem, after 3 years, upgraded from 11.0 to 11.1 with regular updates, has only 2% non-contiguous files. My /tmp FS, which gets cleaned out regularly though not mkfs'd, has less than that.

In fact I've just stepped through running fsck on my various file systems. The one that gets the most churn apart from /tmp is the one I use for downloads:

Downloads: 4902/66096 files (3.3% non-contiguous), 86176/264192 blocks

That's an ext3 FS. None of my reiserfs file systems are above 0.5%.

Myth? Call it that if you want. I'd call it good design.
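That fsck summary line has a fixed shape, so the non-contiguous percentage can be pulled out mechanically. A minimal Python sketch of doing so - the function name and regex are illustrative only, not part of e2fsprogs or any real tool:

```python
import re

def frag_summary(line):
    """Parse an e2fsck-style summary line such as
    '4902/66096 files (3.3% non-contiguous), 86176/264192 blocks'
    and return (files_in_use, files_total, percent_non_contiguous)."""
    m = re.match(r"(\d+)/(\d+) files \(([\d.]+)% non-contiguous\)", line)
    if m is None:
        raise ValueError("unrecognised fsck summary line")
    return int(m.group(1)), int(m.group(2)), float(m.group(3))

used, total, pct = frag_summary(
    "4902/66096 files (3.3% non-contiguous), 86176/264192 blocks")
print(used, total, pct)  # 4902 66096 3.3
```

Handy if you want to script a sweep over several file systems and compare the percentages, as in the message above.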
<snip>
Saying that XFS and ext4 have 'in kernel defraggers' is not quite correct. You might as well say that Reiser has them too, and even the old Berkeley FFS had them - just using 'cylinder groups' kept fragmentation down. What is in the kernel is code that re-packs small files into available space or occasionally moves blocks. It's not a new idea. I'd hardly call it a defragger if it is an intrinsic part of the way the FS works.
As such, a defrag "product" or "tool" would take effort and give little benefit.
In the case of ext4, the "EXT4_IOC_MOVE_EXT" ioctl was written with the primary goal of supporting online defrag. That initially went into ext4 around 2.6.29 or 2.6.30. Akira Fujita (and team) was the author, and I'm pretty sure his only goal was adding the necessary kernel support to allow ext4 defrag to work.

The userspace tool to invoke it is e4defrag, which Akira Fujita (and team) also wrote. Not yet in openSUSE 11.2, I believe, but I assume it will be part of 11.3.

So the above kernel/userspace pair really is an ext4 defrag tool, not just the inherent way the kernel allocates blocks.

xfs has a similar tool, xfs_fsr, which I know less about, but I'm pretty sure it too is a true defrag tool and not just an inherent part of the xfs filesystem driver.

fyi: As I separately posted, I'm part of a project (OHSM) that is using EXT4_IOC_MOVE_EXT for something other than defrag, but I follow the defrag effort because we need the underlying kernel features for our project.

Greg
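What an online defragmenter accomplishes can be modelled abstractly: a file's scattered extent list is replaced by one contiguous run, and the old and new extent maps are swapped. The toy Python sketch below models only that bookkeeping - it does not touch the real EXT4_IOC_MOVE_EXT ioctl or e4defrag, and all the names and numbers are invented for illustration:

```python
def fragment_count(extents):
    """Extents are (start_block, length) pairs; runs that touch
    end-to-start count as a single fragment."""
    frags, prev_end = 0, None
    for start, length in sorted(extents):
        if start != prev_end:
            frags += 1
        prev_end = start + length
    return frags

def toy_defrag(extents, new_start):
    """Copy the file's blocks, in logical order, into one contiguous
    run at new_start - logically what an online defragmenter does
    before swapping the old and new extent maps."""
    total = sum(length for _, length in extents)
    return [(new_start, total)]

before = [(100, 4), (900, 2), (300, 8)]
print(fragment_count(before))                    # 3
print(fragment_count(toy_defrag(before, 5000)))  # 1
```

The real tool has the hard extra problems this sketch ignores: finding a free run big enough, and doing the swap safely while the file is in use.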
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
http://www.kdedevelopers.org/node/2270

--
Lubos Lunak
openSUSE Boosters team, KDE developer
l.lunak@suse.cz , l.lunak@kde.org
Lubos Lunak wrote:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
That article still doesn't explain why defragging helps on files that are not likely to be fragmented. Modern file systems try to write data in contiguous blocks. They do this by finding areas of the drive space that can hold the entire file (and then some) without fragmentation. As long as there is a reasonable amount of free space, this will generally happen. This means that there won't be many files for a defrag app to defrag.

Defragging is necessary on FAT, and to a lesser extent on NTFS, because free space is used sequentially, in whatever block is available next, until the entire file is written. Even then, on those file systems, fragmentation can be reduced by going with larger cluster sizes.

Also, a direct comparison of Windows vs Linux load times is meaningless without going into the details of what gets loaded when. When Microsoft released XP, they claimed it loaded faster than W2000 etc. What they failed to mention was that while the desktop appeared faster, it was essentially useless for a period of time, until things settled down. By contrast, with the KDE desktop, you can click on something as soon as you can see it and the app will open eventually. Which is faster now?
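The allocation difference described above can be sketched in a few lines. This is a deliberately simplified model, not any real allocator: the "modern" strategy places the whole file inside a single free extent that is big enough, while the FAT-style strategy grabs free blocks in address order and fragments the file across however many gaps that takes. Free space is a list of hypothetical (start, length) extents:

```python
def alloc_contiguous(free_extents, size):
    """Modern-FS-style sketch: pick the smallest free extent that can
    hold the whole file (best fit), so the file lands in one piece."""
    fits = [(length, start) for start, length in free_extents if length >= size]
    if not fits:
        return None  # would have to fragment (or report ENOSPC)
    length, start = min(fits)
    return [(start, size)]  # a single extent: no fragmentation

def alloc_fat_style(free_extents, size):
    """FAT-style sketch: consume free blocks in address order until the
    file is written, fragmenting it across the gaps."""
    pieces, need = [], size
    for start, length in sorted(free_extents):
        take = min(length, need)
        pieces.append((start, take))
        need -= take
        if need == 0:
            return pieces
    return None  # not enough space

free = [(10, 3), (50, 20), (200, 8)]
print(alloc_contiguous(free, 8))  # [(200, 8)]         - one extent
print(alloc_fat_style(free, 8))   # [(10, 3), (50, 5)] - two fragments
```

The same 8-block file comes out in one piece under the first policy and in two under the second, which is the whole of James's point in miniature.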
On Monday 15 February 2010 18:02:14 Lubos Lunak wrote:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth
that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
I don't think there is a defragmentation tool in the world that knows which files are read together, and in which order that happens. Defragmentation invariably refers to files being discontiguous on the disk. Anything else is called "optimisation" and, as far as I know, it requires human interaction and knowledge of the process being optimised.

Anders
Anders Johansson said the following on 02/15/2010 02:04 PM:
On Monday 15 February 2010 18:02:14 Lubos Lunak wrote:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth
that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
I don't think there is a defragmentation tool in the world that knows which files are read together, and in which order that happens. Defragmentation invariably refers to files being discontiguous on the disk. Anything else is called "optimisation" and as far as I know, it requires human interaction, and knowledge of the process being optimised
Correct. I recall seeing runs of the MS defrag tool, such as the one I ran on this laptop before shrinking the partition way, way down and putting Suse on the rest of the drive. It just packed everything down tight, which is the wrong way to defrag: it means that once a file is altered the defrag is 'broken'. A good strategy is to leave gaps for the files to expand into. But you never know how much a file may need to expand, or which ones are going to expand, when the automated process packs them down.

Which gets to Anders' point about human guidance. Which may be wrong. I *think* that the system files shouldn't need alteration, but oops! there's a patch, and oops! there's an upgrade. And oops, that file in /home actually hasn't been changed for over 3 years ... it seems my strategy was wrong ... what a pity!

Somehow I don't think that kind of defrag does what you think it does.

So, back to the idea of things like cylinder groups and locality: if we can't guarantee contiguous block layout then at least let's keep head motion down.
On Monday, 2010-02-15 at 19:25 -0500, Anton Aylward wrote: ...
Somehow I don't think that kind of defrag does what you think it does.
So, back to the idea of things like cylinder groups and locality; if we can't guarantee contiguous blocks layout then at least lets keep head motion down.
The strategy would then be to store a group of read requests, learn where the files are located, then plan a strategy to read all that with a good buffer, optimizing the head movements. I.e., do not read the first file directly, but wait for several requests needing to be served before planning the best strategy.

Defragging is probably useless. At best, the kernel could store in a database which requests are usually grouped together, and then optimize file locations. Actually, if a group of files is going to be read, it is probably best to interleave them, i.e., fragmenting them on purpose.

I think that what we need is better, faster hardware: disks with independent heads, capable of reading several sectors at the same time, and having internal buffers in the hundreds-of-megabytes range. Unfragmenting is not going to be that useful on a multitasking OS.

--
Cheers,
Carlos E. R.
Carlos E. R. said the following on 02/15/2010 07:51 PM:
On Monday, 2010-02-15 at 19:25 -0500, Anton Aylward wrote:
...
Somehow I don't think that kind of defrag does what you think it does.
So, back to the idea of things like cylinder groups and locality; if we can't guarantee contiguous blocks layout then at least lets keep head motion down.
The strategy would then be to store a group of read requests, learn where the files are located, then plan an strategy to read all that with a good buffer, optimizing the head movements.
What you've just described is the 'elevator strategy'. And it's not quite that simple, for a number of reasons:

* You've got writes going on as well.
* Some of those writes are writes of dirty pages which HAVE to be written out, so as to free up a page to read into.
* As I said in another post, a lot of the time the system isn't reading a file, certainly not 'sequentially' as if the file were laid out as a contiguous block. It's reading a mapped page of a file. This is the case for all code, programs and libraries, which make up the bulk of what gets read in.

A moment's thought will make you realise that so long as the principle of locality holds - that the files, library and code, are close - head motion is going to be minimised. And it's head motion that is the real delay: track seek. Which is why the elevator algorithm matters so much.

Your point about head motion scheduling, Carlos, is VERY valid. Back in the V7 days - swap-in/swap-out, no VM - I showed that by putting swap-out at the head of the queue, _certain_ types of interactive application would run much, much faster. But in the case of the shared server it didn't help. See your later note about multi-tasking.
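The elevator strategy is easy to demonstrate with a toy simulation (the track numbers and request queue here are invented for illustration): serving requests in one sweep per direction cuts total head travel sharply compared with first-come-first-served.

```python
def seek_distance(start, order):
    """Total head travel for serving tracks in the given order."""
    pos, dist = start, 0
    for track in order:
        dist += abs(track - pos)
        pos = track
    return dist

def elevator(start, requests):
    """Classic elevator/SCAN idea: sweep upward through every request
    at or above the head position, then reverse and sweep downward."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(seek_distance(53, queue))                # FCFS order: 640 tracks
print(seek_distance(53, elevator(53, queue)))  # elevator order: 299 tracks
```

Anton's caveats above are exactly what this toy leaves out: the real queue mixes reads with must-complete dirty-page writes, and requests keep arriving mid-sweep.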
Actually, if a group of files is going to be read, it is probably best to interleave them, i.e., fragmenting them on purpose.
Good point! That's what the MKFS of the old V7 FS believed. Then churn broke it up. But on the system partition, where there was no churn, it seemed to work well.
I think that what we need is better, faster hardware. Disks with independent heads, capable of reading several sectors at the same time, and having internal buffers in the hundreds of megabytes range.
The idea of multiple heads, or even one head axis with the heads for each platter moving separately, is an old one. However, manufacturers stay with the tried-and-true and get performance in other ways.
Unfragmenting is not going to be that useful on a multitasking os.
Indeed. I've seen studies that worked out the performance of just moving the head back and forth non-stop, regardless of what's in the queue, and showed there is little degradation compared to optimal queueing. If you have enough requests - a big enough job pool, a big enough request pool, which is going to be the case for a heavily multi-tasked server, be it business or a shared service at an ISP - then defragmentation isn't going to help. Even if you could do it, the result would be so rapidly churned that it would become meaningless. Better to have a fast disk and good file system design.

--
"The only secure computer is one that's unplugged, locked in a safe, and buried 20 feet under the ground in a secret location... and I'm not even too sure about that one" - Dennis Hughes, FBI.
On Mon, 2010-02-15 at 18:02 +0100, Lubos Lunak wrote:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
"Ok, first of all, talking about defragmenting is actually wrong. Defragmenting is making sure no file is fragmented, i.e. that every file is just one contiguous area of the disk. But do you know any today's application that reads just one file? The thing that should be talked instead should be linearizing, i.e. making sure that related files (not one, files) are one contiguous area of the disk."

But then you get into linearization of files to be read, which to me seems like a different ball game altogether, not to mention something of a logistic nightmare. If you have several heavily used apps that read 20 of the same files at startup, but the other 80 files are generally different, linearizing this would seem to require either multiple copies of the same files, or biasing the order towards the most frequently used app, or the most preferred app.
Lubos Lunak said the following on 02/15/2010 12:02 PM:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
Well, that isn't an explanation. In fact the guy admits he's a KDE developer and not a kernel hacker. I was, though not any more - a kernel hacker, that is. Disk drivers and file systems were my focus back then. Different technology - UNIX, not Linux - but some principles still hold.

One principle is that the old sequential file model of Dennis's UNIX hasn't held for a long time. Shortly after Virtual Memory came along, everything got, well, "mapped". It doesn't matter what the file is - library, executable, or data - it's mapped into virtual memory. This makes the old V7 idea of code/data space and disk buffer space meaningless. It also makes a lot of things faster 'cos there's no copying from "disk buffer" to the application data space if you write the code properly. You just 'map'.

Along with all that VM, we got rid of the idea of fixed in-kernel tables for the maximum number of open files and suchlike. So now you can have a bit of code that opens every file KDE might need, because the real cost of a file isn't the fragmentation, it's resolving the name path, getting the i-node, and getting all the tables set up ready to read it. THAT is why KDE starts up slowly. Not reading in the libraries - opening all those files.

If you want to improve the performance, then don't worry about defragging the file contents, the extents; make sure that the name caching works well and those i-nodes are clustered together. That's going to matter a lot more than whether the files are contiguous extents rather than merely being in the same cylinder group.

Once all those files are opened - all those relevant *.so files - you don't need to do anything more. They are now mapped into virtual memory. If some application code references a library routine then that VM page is referenced. If it's not in memory - page fault. Like the old saw says, "Virtual memory means virtual performance". So add memory, and as the pages fault they come in from disk. Oops!
They get hit all over the place, so maybe a complete *.so doesn't all get read in at once - the whole extent. Heck, libc-2.9.so is 1419604 bytes and has over 1400 library functions. It's unreasonable to expect it to be read into VM as a single extent. Just map it and page in what's needed; never mind where it is on disk.

That being said, it's nice if the head doesn't have to move too much. That's what cylinder groups are for. Modern disks are such that a head can cover a few cylinders without having to move. Which leads to the idea that a few smaller spindles may be better for overall performance than one enormous drive. Well, OK, it's not the old PDP-11/45 or a VAX, where the disk IO can chain requests and a single controller can perform seeks on 7 other drives while doing a transfer from the one the seek has completed on - all without CPU intervention.

So I wish you joy with your defragmentation code in XFS and ext4, and joy running your defragmenters. I'm sure you'll waste a lot, lot more time than you'll save. Me? I'll buy a faster disk every few years. Upgrade my server to do striping. Buy a laptop with a LOT more physical memory. Spend my time playing with the cats, sitting in the sun with a cool drink, talking with friends, cooking new dishes.

--
The spirit of resistance to government is so valuable on certain occasions that I wish it to be always kept alive. It will often be exercised when wrong, but better so than not to be exercised at all. --Thomas Jefferson
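The "map it and page in what's needed" point can be seen in miniature with Python's mmap module: mapping a file is cheap, and only the pages actually touched need to come in from disk. The file and sizes below are made up purely for the demonstration:

```python
import mmap
import os
import tempfile

# Create a 1 MiB file, map it, and touch only the first and last slices;
# the kernel faults the backing pages in on demand rather than reading
# the whole file up front.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * (1 << 20))
    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
        first, last = m[:16], m[-16:]
    print(first == b"x" * 16, last == b"x" * 16)  # True True
finally:
    os.close(fd)
    os.unlink(path)
```

The slicing looks like ordinary reads, but underneath it is exactly the page-fault path described above - which is why on-disk contiguity of the whole file matters less than it did under roll-in/roll-out.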
Anton Aylward wrote:
Lubos Lunak said the following on 02/15/2010 12:02 PM:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
Well, that isn't an explanation. In fact the guy admits he's a KDE developer and not a kernel hacker. I was, though not any more - a kernel hacker, that is. Disk drivers and file systems were my focus back then. Different technology - UNIX, not Linux - but some principles still hold.
The impression I got after reading his link is he was talking about grouping related files to minimize head movement, which is not the same as defragmentation. I have to wonder how he could be a developer and not know the difference. As for interleaving, I recall when that was necessary, because the computer couldn't keep up with disk data.
James Knott said the following on 02/16/2010 10:01 AM:
Anton Aylward wrote:
Lubos Lunak said the following on 02/15/2010 12:02 PM:
Well, that isn't an explanation. In fact the guy admits he's a KDE developer and not a kernel hacker. I was, though not any more - a kernel hacker, that is. Disk drivers and file systems were my focus back then. Different technology - UNIX, not Linux - but some principles still hold.
The impression I got after reading his link is he was talking about grouping related files to minimize head movement, which is not the same as defragmentation. I have to wonder how he could be a developer and not know the difference. As for interleaving, I recall when that was necessary, because the computer couldn't keep up with disk data.
While disk driver software may issue a 'seek' to a particular track, that doesn't mean the head moves, even if it had just done a seek and transfer from another track. The reality is that heads are "wide" enough to "cover" a few adjacent tracks - or cylinders. Back at the beginning of the 1980s, Berkeley came out with the "Fast File System" as an alternative to the old V7 file system. It used this idea to put files in groups - "cylinder groups" - so as to, as you say, minimise head movement when accessing related files.

However, we are now running mapped Virtual Memory systems. Files, for the most part, are not 'read in'; they are paged in. In parts. As page faults demand. The idea of "extents" - and, as you said, interleaving for slower machines - made sense when we were running the "roll-in/roll-out" model of V7 UNIX. You had to "roll-in" ALL of an executable before you could begin executing it, and had to "roll-out" ALL of it to swap. In that case putting the file in a contiguous extent made sense.

We don't do that now. We "map the file". And the libraries. So a program is 'loaded'. Well, no, it's mapped in. Switch to user space and go to "__start()". Oops! Page fault - bring in that page. There was a time the smarts said bring in the next few as well, but let's face it, the first thing the program does is call its initialization code, which is way over there ... more page faults ... then the command line scanner ... over there. Contiguous files don't make as much sense as they once did.

And while the program is suspended waiting for the faulted pages to come in, some other code is running and generating its own disk IO requests. And LO! There aren't the free pages available for all this IO, so some have to be paged out ... and that depends on the queueing mechanisms, which have a BIG impact on performance and disk activity. And don't forget, the swap partition is way, way over there - lots of disk head movement. Hmm. Why not put it on a separate spindle? Or flash memory?
Ridiculously, if a code page has just been used it's less likely to be freed, but that page with the initialization code isn't going to be wanted again ... So a lot of the algorithms are ... well, sub-optimal. It's a hard life! -- Most people are not really free. They are confined by the niche in the world that they carve out for themselves. They limit themselves to fewer possibilities by the narrowness of their vision. --V. S. Naipaul
James Knott wrote:
Anton Aylward wrote:
Lubos Lunak said the following on 02/15/2010 12:02 PM:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
Well, that isn't an explanation. In fact the guy admits he's a KDE developer and not a kernel hacker. I was one, though not any more (a kernel hacker, that is). Disk drivers and file systems were my focus back then. Different technology, UNIX, not Linux, but some principles still hold.
The impression I got after reading his link is he was talking about grouping related files to minimize head movement, which is not the same as defragmentation. I have to wonder how he could be a developer and not know the difference. As for interleaving, I recall when that was necessary, because the computer couldn't keep up with disk data.
Please do not take offense - but - maybe the developer is from India. The reason I say this, is because, in Nov, my wife lost her IT job, Business Analyst, to India. Thankfully, she found another job, in a smaller company, by mid-Dec. However, she found out that more layoffs were coming 1/1 and 1/16 - to - guess - where - INDIA. Duaine -- Duaine Hechler Piano, Player Piano, Pump Organ Tuning, Servicing & Rebuilding Reed Organ Society Member Florissant, MO 63034 (314) 838-5587 dahechler@att.net www.hechlerpianoandorgan.com -- Home & Business user of Linux - 10 years
Duaine Hechler wrote:
The impression I got after reading his link is he was talking about grouping related files to minimize head movement, which is not the same as defragmentation. I have to wonder how he could be a developer and not know the difference. As for interleaving, I recall when that was necessary, because the computer couldn't keep up with disk data.
Please do not take offense - but - maybe the developer is from India.
The reason I say this, is because, in Nov, my wife lost her IT job, Business Analyst, to India. Thankfully, she found another job, in a smaller company, by mid-Dec.
However, she found out that more layoffs were coming 1/1 and 1/16 - to - guess - where - INDIA.
Duaine
Still, a developer, no matter where they are, should know the difference. If not, I'd consider them incompetent, as this sort of thing is very basic knowledge.
Duaine Hechler said the following on 02/16/2010 02:06 PM:
Please do not take offense - but - maybe the developer is from India.
Maybe, but how is the geographic location relevant to what the developer said - which really didn't have much to do with fragmentation anyway.
The reason I say this, is because, in Nov, my wife lost her IT job, Business Analyst, to India. Thankfully, she found another job, in a smaller company, by mid-Dec.
I'm sorry she had to go through that, but again I don't see how it's relevant to how KDE behaves or how the file systems deal with layout and fragmentation.
On Tuesday 16 of February 2010, Duaine Hechler wrote:
James Knott wrote:
Anton Aylward wrote:
Lubos Lunak said the following on 02/15/2010 12:02 PM:
On Monday 15 of February 2010, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
Well, that isn't an explanation. In fact the guy admits he's a KDE developer and not a kernel hacker. I was one, though not any more (a kernel hacker, that is). Disk drivers and file systems were my focus back then. Different technology, UNIX, not Linux, but some principles still hold.
The impression I got after reading his link is he was talking about grouping related files to minimize head movement, which is not the same as defragmentation. I have to wonder how he could be a developer and not know the difference. As for interleaving, I recall when that was necessary, because the computer couldn't keep up with disk data.
Please do not take offense - but - maybe the developer is from India.
The reason I say this, is because, in Nov, my wife lost her IT job, Business Analyst, to India. Thankfully, she found another job, in a smaller company, by mid-Dec.
Gee, could you please stop this before this thread gets any worse? It got already quite pointless after about 5 mails, but this is really too much. If some people on this list feel they need to just chat, maybe there should be a separate opensuse-chat mailing list created. As I like getting straight to the facts, here are some facts for you:

- The linked blog entry is from me. It can be easily recognized by it having my name in the header, just like this mail does. What makes somebody indulge in strange theories about the author's details instead of clicking the name in the blog to see the about page is beyond me.

- Just because I'm not a kernel developer does not mean I'm clueless or even stupid. In fact, I usually provide evidence for facts presented in my posts, and the major factor of KDE startup time at the time of writing was inefficient filesystem layout of data.

- Just because you think filesystems are only about files does not mean it's the only thing that is read from the disk or that filesystem fragmentation is only about having each single file in a single contiguous area. I explained that in the blog post.

- This thread is leading nowhere anyway, as most people enjoying it seem to lack basic knowledge of facts (nobody needs to care about filesystem layout today - xfs/ext4/Windows SuperFetch/ReadyBoost?) or apparently even the ability to read and understand text.

- Finally, anybody's wife losing a job is no good reason to publicly offend people this way, even when that offense is hypocritically introduced with "please do not take offence".

-- Lubos Lunak openSUSE Boosters team, KDE developer l.lunak@suse.cz , l.lunak@kde.org
Lubos Lunak said the following on 02/16/2010 05:16 PM:
- Just because I'm not a kernel developer does not mean I'm clueless or even stupid. In fact, I usually provide evidence for facts presented in my posts, and the major factor of KDE startup time at the time of writing that was inefficient filesystem layout of data.
Tell me, since I don't have a time machine: was it the case back when you wrote that in 2006 that KDM and KDE used the directives in /etc/preload/kdm and /etc/preload.d/kde?

I ask this because, as I've said, the overhead of opening a file is often a lot greater than reading the file, no matter how fragmented it is, because of the name resolution needed to access the i-node. In general, there's a good chance the file, no matter how fragmented it is, is in the same cylinder group. But its name segments could be all over the place. The idea behind the preload is that if the files are all opened and the name paths are in the name-path cache then the relevant applications will start faster. The cost of this is a long, long delay in the (first) initial start-up - which is what you describe in your article.

I'm not saying your omission was in ignorance, back in 2006, since I don't have a time machine to go back and install - what was it, 10 point something? - and look to see if preloading was in use. Perhaps someone knows. But I do note you didn't mention preloading except as an aside towards the end, and even then in a rather deprecatory manner, and you seem to think that it's a matter of 'caching' - well, if you call the algorithm whereby pages no longer in use are queued for release 'a cache', then yes, but as I've tried to explain, it's not really a cache like a buffer cache, because file IO is no longer buffered; it's mapped and 'load on demand'.

You are quite correct when you say "But do you know any today's application that reads just one file?" Looking at /etc/preload.d/kde gives a good indication of what files KDE is going to use. You go on to say "The thing that should be talked instead should be linearizing, i.e. making sure that related files (not one, files) are one contiguous area of the disk."
Again correct; that's what modern file systems, starting with the Berkeley Fast File System of early-1980s vintage, are about - "cylinder grouping" of files and putting the file data near the i-node. The contrast in head motion between the old V7 file system, which had the i-nodes at the beginning and the data at the end, and the FFS was dramatic. Heck, even the difference in head motion with the 'inverted V7 FS', where the layout went

+===============+-----------+-----------+==========================+
| system data   | sys-inode | usr-inode | usr data                 |
+===============+-----------+-----------+==========================+

reduced head motion.

However you do not mention the load-on-demand nature of virtual memory, which is very important. Once the files are opened and mapped the VM takes over. As I said ...

<quote> We "map the file". And the libraries. So a program is 'loaded'. Well, no, it's mapped in. Switch to user space and go to "__start()". Oops! Page fault - bring in that page. There was a time the smarts said bring in the next few as well (http://en.wikipedia.org/wiki/Paging#Anticipatory_paging), but let's face it, the first thing the program does is call its initialization code, which is way over there ... more page faults ... then the command line scanner ... over there. </quote>

(This is explained at http://en.wikipedia.org/wiki/Demand_paging#Unix_implementation:

<quote> The operating system maps the executable file (and its dependent libraries) into the newly created program's virtual address space, without actually allocating any physical RAM for the contents of those files. Since executable code is usually read-only and shared, the program literally runs from the page cache. </quote> )

So there is a fair bit of crazy-paging. And as I went on to say, the paging algorithm has a 'principle of locality' built in, so it tends to retain those initial pages even when they have run their one-time-only code.
See http://en.wikipedia.org/wiki/Demand_paging and the various links from there for various views of what that is all about.

A really, really good -hypothetical- application developer would have a really, really good -hypothetical- VM system call at the end of such routines to tell the VM that there was no need to retain the page of code this routine was in. Ohh, look! -- But then I'm contaminated by having written for the VAX as well :-(

I don't think you fully realise the why and wherefore of preload and the cost of file name resolution. Name-paths get cached, yes. Personally I think that the preloading and name caching adequately answers your question "why does second start-up of KDE need only roughly one quarter of time the first start-up needs?"

I might also mention in passing that file systems which use a b-tree directory structure let the kernel do faster name lookup.
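For what it's worth, the "hypothetical" VM call wished for above does exist on Linux as madvise(2). A hedged sketch using Python's mmap.madvise (Python 3.8+; MADV_DONTNEED is platform-specific, hence the guard): after advising, a file-backed page can simply be dropped by the kernel and transparently re-faulted from the file if it turns out to be wanted after all.

```python
import mmap
import os
import tempfile

# A throwaway file standing in for a mapped code segment.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * mmap.PAGESIZE * 4)
os.close(fd)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    _ = mm[0]  # fault the first page in

    # Tell the VM this page's contents won't be wanted again. For a
    # file-backed mapping the kernel may just free the page; if we are
    # wrong, a later access simply re-faults it from the file.
    if hasattr(mmap, "MADV_DONTNEED"):  # Linux-only constant
        mm.madvise(mmap.MADV_DONTNEED, 0, mmap.PAGESIZE)

    still_readable = mm[0:1]  # dropped pages fault back in transparently
    mm.close()

os.remove(path)
```

This is exactly the "code pages don't need to be paged out, just freed" argument that comes up later in the thread: the backing file is the pager's copy.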
- Just because you think filesystems are only about files does not mean it's the only thing that is read from the disk or that filesystem fragmentation is only about having each single file in a single contiguous area. I explained that in the blog post.
Indeed, it's not "just" - as I keep saying, name lookup, opening the file, and manipulation of i-nodes are a big issue. Which is why there is pre-loading. I don't think you explained that in the post. But then again, that was 2006; this is now. There's also a follow-up: http://rudd-o.com/en/linux-and-free-software/about-boot-time-optimization-in...

I must admit here that I'm confused: are we talking about boot-to-get-to-the-login-prompt, that is, getting the kernel into memory, running init, and running all the appropriate files in /etc/init.d, or are we talking about STARTUP of something like KDE **after login**?

But fewer files? (Item #1) Eliminating the preload won't eliminate the need to open those files. It will just defer it. Maybe that will make some kinds of start-up feel snappier at the cost of 'pauses' later.

But smaller files? (Item #2) Does that mean 64-bit code will take longer? Should we go back to 16-bit machines?

Item #6 is really playing around with the virtual memory and swapping/paging. There's one argument which says that code pages don't need to be paged out, just freed, because they are code and can be paged in - again - when needed from the original file - which is mapped anyway. So the cost of writing code pages to disk doesn't make sense. The other argument revolves around somehow determining the working set - which doesn't make sense in a heavily multi-tasked setting - and keeping it around as a contiguous block.

Some history of the Linux VM is at http://www.usenix.org/event/usenix01/freenix01/full_papers/riel/riel_html/in...

Tuning the VM: http://www.cyberciti.biz/faq/tag/linux-swappiness/ http://www.cyberciti.biz/faq/linux-kernel-tuning-virtual-memory-subsystem/
On Tuesday, 2010-02-16 at 18:59 -0500, Anton Aylward wrote:
I might also mention in passing that file systems which use a b-tree directory structure let the kernel do faster name lookup.
reiserfs?

What we are talking about reminds me of databases. If something needs to load dozens of files, and there is a cost to finding them, then perhaps we are using the wrong tools. Or perhaps the strategy would be to write a single file compiling the contents of all those files (configuration or data). But then, there would be a need to detect whether one of those files has changed and the compendium has to be regenerated, and that would be costly if it needed to be run on every start. So that would be a database :-)
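Carlos's "compendium that must be regenerated when a source changes" can be sketched with a simple mtime check; the function and file layout below are invented for illustration, not taken from any real tool.

```python
import os

def build_compendium(sources, cache_path):
    """Rebuild cache_path from sources, but only if some source is newer.

    Returns True if the compendium was (re)generated, False if the
    cached copy was still fresh.
    """
    def stale():
        if not os.path.exists(cache_path):
            return True
        cache_mtime = os.path.getmtime(cache_path)
        # Any source modified after the cache invalidates it.
        return any(os.path.getmtime(s) > cache_mtime for s in sources)

    if stale():
        with open(cache_path, "w") as out:
            for s in sources:
                with open(s) as f:
                    out.write(f.read())
        return True
    return False
```

The cheap part is exactly what Carlos notes: the per-start cost is a handful of stat() calls, not a re-read of every source file.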
Item #6 is really playing around with the Virtual memory and swapping/paging. There's one argument which says that code pages don't need to be paged-out, just freed, because they are code and can be paged in - again - when needed from the original file - which is mapped anyway. So the cost of writing code pages to disk doesn't make sense.
Actually, that is what Windows code does. At least, so they said, back in 3.1. But, methinks, if the code needs to be re-read several times over time, perhaps it would be faster to load it instead from a pagefile/swap space, which is contiguous by design.
The other argument revolves around somehow determining the working set and - which doesn't make sense in a heavily multi-tasked setting - and keeping it around as a contiguous block.
No, it doesn't. Other tasks may run in between and break the nicely pre-designed load sequence. -- Cheers, Carlos E. R.
Carlos E. R. said the following on 02/16/2010 08:10 PM:
On Tuesday, 2010-02-16 at 18:59 -0500, Anton Aylward wrote:
I might also mention in passing that file systems which use a b-tree directory structure let the kernel do faster name lookup.
reiserfs?
Among others ... :-)
What we are talking about reminds me of databases. If something needs loading dozens of files, and there is a cost to finding them, then perhaps we are using the wrong tools.
Or perhaps the strategy would be to write a single file compiling the contents of all those files (configuration or data). But then, there would be the need of detecting if one of those files has changed and the compendium has to be regenerated, and that would be costly if needed to be run on every start.
So that would be a database :-)
Not quite. It's a library module. All those separately compiled bits put in one file with an index. The trouble is that you don't want ALL the possible code in the one library - it gets to be humongous and difficult to maintain and regenerate, so you break it up into bits that are related - or not related, depending on how you look at it (e.g. gnome libraries vs kde libraries). So, go look at what gets pre-loaded in /etc/preload.d/kde for example. Another way to do it is to zip (or tar.gz) up all those graphics files that make up the "themes" and decoration for KDE ...
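The zip-the-theme-files idea can be sketched with Python's zipfile: many small files become one archive, so there is one open and one on-disk name lookup per bundle rather than one per file, and members are found via the archive's own index. The member names below are invented for illustration.

```python
import io
import zipfile

# Many small "theme" files, as a stand-in for KDE decorations.
theme_files = {
    "icons/close.png": b"\x89PNG...close",
    "icons/minimize.png": b"\x89PNG...min",
    "decorations/titlebar.svg": b"<svg/>",
}

# Writer side: bundle everything into a single archive (in memory here;
# on disk this would be one file, one i-node, one directory entry).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as bundle:
    for name, data in theme_files.items():
        bundle.writestr(name, data)

# Reader side: member lookup happens inside the archive's central
# directory, not through the filesystem's name resolution.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as bundle:
    titlebar = bundle.read("decorations/titlebar.svg")
    members = sorted(bundle.namelist())
```

The trade-off mirrors the library-module discussion above: one big bundle is cheap to open but awkward to regenerate when a single member changes.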
Item #6 is really playing around with the Virtual memory and swapping/paging. There's one argument which says that code pages don't need to be paged-out, just freed, because they are code and can be paged in - again - when needed from the original file - which is mapped anyway. So the cost of writing code pages to disk doesn't make sense.
Actually, that is what windows code does.
Actually that's what Linux does, as I understand it. And what most OSs do.
But, methinks, if the code needs to be re-read several times during the time, perhaps it would be faster to load instead from a pagefile/swapspace, which is contiguous by design.
It all depends. What are you reloading? And how much? Paging from the code files (see the references I gave), which are already memory mapped, isn't that expensive. This is paging, not roll-in/roll-out. After all, the file has already been opened; the kernel has the handle and i-node and all that. Now if your system is so heavily loaded that you are in effect rolling out ALL the dirty pages of a process - in effect swapping it out - it means you are badly memory starved, and that's a problem to be solved by other means.

Let's take a real simple example of VM. All pages of user space are in a queue. They get tagged as code or data, and get tagged if they are written to. Code never gets written to. Initially all pages are free. Init runs and loads a program. Well, actually it opens the executable, maps it to virtual memory and jumps to the 'known location' of "__start()". There's nothing there, so we get a page fault to read that page in. Take a page off the front of the queue and map it. Run the code. Lather, rinse, repeat as more pages fault. But stack and heap have been allocated, and they fault and get allocated as well.

Eventually all the physical memory is used. Whenever a page is referenced, it's moved back to the start of the queue, so the least used pages drift to the end. When a page is needed and the list is full, the "least recently used" page is taken off the list and deallocated from its old map. Now if it was a code page, it wasn't written to, and we can get it back any time, just the way it came in. But if it was a 'data' page, we need to make a copy on disk. That's what the swap area (or file) is for.

That's how it began. You can find various explanations in more detail on the 'Net. Try Wikipedia for a start. There are a LOT of parameters you can play with in there. See /proc/sys/vm for some of the tunable parameters under Linux.
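The queue Anton walks through can be modelled in a few lines. This is a toy simulation of the idea only (clean code pages are just dropped; dirty data pages cost a swap write), not the actual Linux VM:

```python
from collections import OrderedDict

class ToyPageQueue:
    """Toy LRU page list: clean pages are dropped, dirty ones swapped."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page id -> dirty flag; front = MRU
        self.swap_writes = 0

    def touch(self, page, write=False):
        if page in self.pages:
            # Re-reference: keep the dirty flag sticky.
            dirty = self.pages.pop(page) or write
        else:
            dirty = write
            if len(self.pages) >= self.capacity:
                self._evict()
        # A referenced page moves back to the start of the queue.
        self.pages[page] = dirty
        self.pages.move_to_end(page, last=False)

    def _evict(self):
        # Least-recently-used page comes off the end of the list.
        victim, dirty = self.pages.popitem(last=True)
        if dirty:
            self.swap_writes += 1  # data page: must be copied to swap
        # Clean (code) page: just freed; re-faultable from its file.

q = ToyPageQueue(capacity=2)
q.touch("code_init")           # clean code page
q.touch("heap_1", write=True)  # dirty data page
q.touch("code_main")           # evicts code_init: no swap write needed
q.touch("heap_2", write=True)  # evicts heap_1: one swap write
```

Running the four touches above leaves one swap write (for the dirty heap page) and none for the evicted code page, which is the asymmetry the whole "code pages don't need to be paged out" argument rests on.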
The other argument revolves around somehow determining the working set and - which doesn't make sense in a heavily multi-tasked setting - and keeping it around as a contiguous block.
No, it doesn't. Other tasks may run in between and break the nicely pre-designed load sequence.
Not only that, but many of those blocks are shared. That's what the pre-loading is about. OMG! I've just looked in /etc/preload.d/kde and most of the entries don't exist! Think of all the wasted time trying to look them up! Time for a shell script to do some pruning.
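That pruning script might look something like the sketch below; the one-path-per-line format (with blank lines and '#' comments ignored) is an assumption about /etc/preload.d files, not a checked fact.

```python
import os

def prune_preload_list(lines):
    """Split preload entries into those that still exist and stale ones.

    `lines` is the content of a preload file, one path per line
    (an assumed format); blanks and '#' comments are skipped.
    """
    keep, stale = [], []
    for line in lines:
        path = line.strip()
        if not path or path.startswith("#"):
            continue
        (keep if os.path.exists(path) else stale).append(path)
    return keep, stale

# Example with an invented stale entry:
keep, stale = prune_preload_list([
    "# kde preload list",
    "/bin/sh",                        # very likely present
    "/opt/kde3/lib/no-such-lib.so",   # invented stale path
])
```

A real version would rewrite the file with only the `keep` entries, ideally after a backup.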
On Monday, 2010-02-15 at 11:15 -0500, James Knott wrote:
Lubos Lunak wrote:
As demonstrated also in this thread, there is a widely accepted myth that defragmenting is completely useless with Linux, and as such nobody has been really bothered enough to write any reasonably usable generic tool.
Given that modern file systems are fragmentation resistant, please explain how fragmentation is a problem on Linux.
I have an ext2/3 filesystem that is highly fragmented:

nimrodel:~ # fsck /dev/sdb1
fsck 1.40.8 (13-Mar-2008)
e2fsck 1.40.8 (13-Mar-2008)
Moria_250 has been mounted 1574 times without being checked, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 3A: Optimizing directories
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Moria_250: ***** FILE SYSTEM WAS MODIFIED *****
Moria_250: 5872/30408704 files (62.6% non-contiguous), 34630399/60791960 blocks
nimrodel:~ #

See that 62.6%? That's fragmentation. The "treatment" was to copy all of it somewhere else, reformat, and copy back. It's no use. I'm currently running e2fsck on that unit, then I will tell you the current figure.

-- Cheers, Carlos E. R.
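For anyone wanting to track that figure over time, the percentage can be pulled out of e2fsck's summary line mechanically; a small sketch:

```python
import re

def fragmentation_pct(e2fsck_summary):
    """Extract the non-contiguous percentage from an e2fsck summary line,
    e.g. 'Moria_250: 5872/30408704 files (62.6% non-contiguous), ...'.
    Returns None if the line doesn't carry the figure."""
    m = re.search(r"\(([\d.]+)% non-contiguous\)", e2fsck_summary)
    return float(m.group(1)) if m else None

pct = fragmentation_pct(
    "Moria_250: 5872/30408704 files (62.6% non-contiguous), "
    "34630399/60791960 blocks"
)
```

Logged at each forced check, this gives a cheap before/after comparison for "treatments" like the copy-off/reformat/copy-back Carlos describes.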
Carlos E. R. said the following on 02/15/2010 12:58 PM:
See that 62.6%? That's fragmentation.
How the <obscenity> did you manage that!!!
The "treatment" was to copy all of it somewhere else, reformat, and copy back. It's no use.
It wouldn't be. You need to:

- copy out
- re-mkfs the original
- copy back in

Of course I use LVM, so what I do is simply create a new LV, mkfs there, copy across (cpio or rsync), then remount the new LV where the old LV was. Sometime later I deallocate the old LV. Much faster. Saves one copy.
I'm currently running e2fsck on that unit, then I will tell you the current figure.
Carlos E. R. said the following on 02/15/2010 12:58 PM:
See that 62.6%? That's fragmentation.
How the <obscenity> did you manage that!!!
Dunno. It is not SUSE Linux.

cer@nimrodel:~> telnet moria
...
-----------------------------------------
Host: MORIA
Version FW:
Version SIESTA: 2.03.61 SIESTA-Lemmi-09
Actualización: 2009-11-13 21:58:37
-----------------------------------------
...
[root@MORIA:~]# uname -a
Linux MORIA 2.4.21-xfs #646 Wed Aug 3 10:01:46 CEST 2005 mips unknown
[root@MORIA:~]# ls --help
BusyBox v1.01 (2006.11.30-16:43+0000) multi-call binary
...

It is a Digital TV receiver. It does time shift and recording on an external disk, via USB (or via Samba over the network). By default it uses FAT, but it happily accepts ext2, and actually runs faster. It does the recording in many small (80MB) files instead of one large file.
The "treatment" was to copy all of it somewhere else, reformat, and copy back. It's no use.
It wouldn't be. You need to
copy out, re-mkfs the original, copy back in
Well, that's what I said and did.
I'm currently running e2fsck on that unit, then I will tell you the current figure.
Moria_250 has been mounted 258 times without being checked, check forced.
...
Moria_250: 5450/30408704 files (56.8% non-contiguous), 40731138/60791960 blocks

-- Cheers, Carlos E. R.
participants (10)

- Anders Johansson
- Anton Aylward
- Carlos E. R.
- Duaine Hechler
- Felix Miata
- Greg Freemyer
- James Knott
- John Heinen
- Lubos Lunak
- Mike McMullin