Carlos E. R. said the following on 02/15/2010 07:51 PM:
On Monday, 2010-02-15 at 19:25 -0500, Anton Aylward wrote:
...
Somehow I don't think that kind of defrag does what you think it does.
So, back to the idea of things like cylinder groups and locality; if we can't guarantee a contiguous block layout then at least let's keep head motion down.
The strategy would then be to store a group of read requests, learn where the files are located, then plan a strategy to read all of that with a good buffer, optimizing the head movements.
What you've just described is the 'elevator strategy'. And it's not quite that simple, for a number of reasons:

* You've got writes going on as well.
* Some of those writes are writes of dirty pages which HAVE to be written out so as to free up the page, so as to have a page to read into.
* As I said in another post, a lot of the time the system isn't reading a file, certainly not 'sequentially' as if the file were laid out as a contiguous block. It's reading a mapped page of a file. This is the case for all code, programs and libraries, which make up the bulk of what gets read in.

A moment's thought will make you realise that so long as the principle of locality holds, that the files - library and code - are close, head motion is going to be minimised. And it's head motion that is the real delay: track seek. Which is why the elevator algorithm matters so much.

Your point about head motion scheduling, Carlos, is VERY valid. Back in the V7 days, swap-in/swap-out, no VM, I showed that by putting swap-out at the head of the queue, _certain_ types of interactive application would run much, much faster. But in the case of the shared server it didn't help. See your later note about multi-tasking.
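For what it's worth, the elevator (SCAN) ordering is easy to sketch: sort the pending requests by track and service everything ahead of the head in the current sweep direction, then pick up the rest on the return sweep. This is just a toy illustration (the function name and track numbers are mine, not from any real kernel):

```python
def elevator_order(requests, head, direction=1):
    """Return track requests in elevator (SCAN) service order.
    requests: iterable of track numbers; head: current head position;
    direction: +1 = sweep toward higher tracks first, -1 = lower first."""
    # Requests on or ahead of the head in the sweep direction, in sweep order.
    ahead = sorted(r for r in requests if (r - head) * direction >= 0)
    # Requests behind the head, serviced on the return sweep.
    behind = sorted((r for r in requests if (r - head) * direction < 0),
                    reverse=True)
    if direction < 0:
        ahead = ahead[::-1]    # descending while the head moves down
        behind = behind[::-1]  # then ascending on the way back up
    return ahead + behind

# Head at track 50, sweeping up:
print(elevator_order([10, 95, 60, 30, 80], head=50))
# prints: [60, 80, 95, 30, 10]
```

The point of the ordering is exactly the head-motion argument above: one pass up and one pass back, instead of seeking in arrival order.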
Actually, if a group of files is going to be read together, it is probably best to interleave them, i.e., fragmenting them on purpose.
Good point! That's what the MKFS of the old V7 FS believed. Then churn broke it up. But on the system partition, where there was no churn, it seemed to work well.
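To make the interleaving point concrete, here's a toy head-travel model (all the numbers are invented for illustration): two files read concurrently, one block from each per turn. With contiguous layout the head ping-pongs between two distant regions; with the files interleaved, each "turn" only moves the head one track.

```python
def head_travel(order):
    """Total head travel (in tracks) for a sequence of track accesses."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

# File A contiguous on tracks 0-3, file B contiguous on tracks 100-103,
# read alternately one block at a time:
contiguous = [0, 100, 1, 101, 2, 102, 3, 103]
# Same two files interleaved block-by-block on tracks 0-7:
interleaved = [0, 1, 2, 3, 4, 5, 6, 7]

print(head_travel(contiguous), head_travel(interleaved))
# prints: 697 7
```

Which is why deliberate "fragmentation" can win for files that are always read together - and why churn, which destroys the interleaving, broke it.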
I think that what we need is better, faster hardware. Disks with independent heads, capable of reading several sectors at the same time, and having internal buffers in the hundreds of megabytes range.
The idea of multiple heads, or even one head axis with the heads for each platter moving separately, is an old one. However, manufacturers stick with the tried-and-true and get performance in other ways.
Defragmenting is not going to be that useful on a multitasking OS.
Indeed. I've seen studies that worked out the performance of just moving the head back and forth non-stop, regardless of what's in the queue, and they show there is little degradation compared to optimal queueing.

If you have enough requests - a big enough job pool, a big enough request pool, which is going to be the case for a heavily multi-tasked server, be it business or a shared service at an ISP - then defragmentation isn't going to help. Even if you could defragment, the layout would be so rapidly churned that it would become meaningless. Better to have a fast disk and good file system design.

--
"The only secure computer is one that's unplugged, locked in a safe, and buried 20 feet under the ground in a secret location... and I'm not even too sure about that one" - Dennis Hughes, FBI.