On Wednesday, 28 January 2009, Michael Meeks wrote:
> Hi Stephan,
>
> On Mon, 2009-01-26 at 10:58 +0100, Stephan Kulow wrote:
> > * how does preload differ from sreadahead ?
>
> Wow - thanks :-) nice comparison; so to re-frame it:
> * similarities:
>   + both pre-load only file data => reading the inodes, crawling
>     the directory structures etc. is all done synchronously,
>     without much parallelism or I/O sorting [ modulo sreadahead's
>     4x threads ].

Hmm, preload does stats too - didn't I say that?
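To make the two kinds of work concrete, here is a minimal sketch
(not preload's or sreadahead's actual code, and the file list is
made up): stat() warms the inode/dentry caches, readahead(2) pulls
the file data into the page cache.

/* Sketch only: stat() for metadata, readahead(2) for file data. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

static void warm(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)      /* warms the inode/dentry caches */
        return;

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return;
    readahead(fd, 0, st.st_size);  /* queue the whole file for read-in */
    close(fd);
}

int main(void)
{
    /* made-up list; a real preloader reads this from its config */
    const char *files[] = { "/lib/libc.so.6", "/bin/bash" };
    for (unsigned i = 0; i < sizeof files / sizeof *files; i++)
        warm(files[i]);
    return 0;
}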
> * differences:
>   + sreadahead lets boot continue while pre-fetching to interleave
>     CPU/sleep-intensive loads [ eg. boot.udev+ in your chart ],
>     preload instead defers the work so we get better seek
>     behaviour on rotating media.

Right - the eeepc might not have the bad seek performance laptops have.
>   + sreadahead only forces in the parts of the files we know are
>     used, preload forces in the whole file - in practice this
>     makes no difference [ you assert ].

sreadahead-pack has a -d switch that will tell you how much of each
file is in mincore, and I get 100% for all of them.
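For anyone wanting to check this themselves, a rough sketch of such
a mincore(2) residency check (not sreadahead-pack's actual
implementation): map the file and count how many of its pages are
already in core.

/* Sketch: report what fraction of a file is resident in memory. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;

    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0 || st.st_size == 0)
        return 1;

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED)
        return 1;

    long page = sysconf(_SC_PAGESIZE);
    size_t pages = (st.st_size + page - 1) / page;
    unsigned char *vec = malloc(pages);
    if (!vec || mincore(map, st.st_size, vec) != 0)
        return 1;

    size_t resident = 0;
    for (size_t i = 0; i < pages; i++)
        resident += vec[i] & 1;    /* low bit set => page is in core */
    printf("%s: %zu of %zu pages resident (%.0f%%)\n",
           argv[1], resident, pages, 100.0 * resident / pages);
    return 0;
}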
>   + there are several phases of preload, a single phase for
>     sreadahead.

Yes, and these are still too few. I'm working on preloadNG :)
> So - preload allows you to re-generate when booting with preload ?
> that sounds pretty neat - how do you elide I/O caused by preload
> itself ? by process-id [ it seems the tools parse strace output to
> generate the preload lists ].

Yes, preload execs are left out when looking at the pattern.
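The filtering idea is simple enough to sketch - assuming strace -f
style output, where each line starts with the pid that made the
syscall, and a made-up pid standing in for preload itself:

/* Sketch: drop strace -f lines produced by the preloader's own pid
 * so its reads don't end up in the generated preload list. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long preload_pid = 1234; /* hypothetical pid of preload */
    char line[4096];

    while (fgets(line, sizeof line, stdin)) {
        if (strtol(line, NULL, 10) == preload_pid)
            continue;              /* elide preload's own I/O */
        fputs(line, stdout);
    }
    return 0;
}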
> Reading the preload code, it looks rather nice :-) I guess my only
> concern is keeping the preload data itself up-to-date: apparently
> we don't ship it in SLED11, and eg. my /etc/preload.d/OpenOffice
> is obsolete.

I know, and I won't maintain these preload lists as they are. As you
noticed yourself below, I'm working on a new idea.
> As a crazy idea - do you think SSDs and rotating media are
> converses - ie. if in an SSD world it makes sense to run preload
> at a really low I/O priority, perhaps in a rotating world it makes
> sense to run preload at an incredibly high priority [ while
> letting boot continue in parallel at the world's lowest I/O
> prio ] ?
It wouldn't surprise me if there are SSDs that have a bank-switching
time and create a third class ;)
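For reference, the "really low I/O priority" end of that trade-off
is just the idle I/O class - a sketch of what ionice -c3 does under
the hood (constants copied from linux/ioprio.h, since glibc ships no
wrapper for the syscall):

/* Sketch: drop this process into the idle I/O class before it
 * starts prefetching. */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_CLASS_IDLE  3
#define IOPRIO_WHO_PROCESS 1

int main(void)
{
    int prio = IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT;

    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0 /* self */, prio)) {
        perror("ioprio_set");
        return 1;
    }
    /* ... any prefetching from here on only runs when the disk is
     * otherwise idle */
    return 0;
}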
> > * do you take the moblin route:
> >   + of running preload asynchronously at the lowest I/O priority
> >   + of growing /sys/proc/sda/queue/nr_requests to 1024+
> >     [ supposedly so the fairness code works ;-) ]
> >
> > Doesn't change _anything_ - it defaults to 128 and the queue
> > never gets that long, at least neither with sreadahead nor with
> > preload. And yes, I tested (of course I used the correct name -
> > around /sys/devices/pci0000:00/0000:00:1f.2/host2/target2:0:0/2:0:0:0/block/sda/queue/nr_requests)
> Ah-well, some missing punctuation fluff; mine is: ;-)
>
> /sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/queue/nr_requests
> Out of interest, how long does the queue get ?
No idea, but if it were longer than 128, 1024 would have made a
difference ;)
> But nice anecdote.
It was an anecdote? Let me make it one: Arjan said this was a good
idea - what should I know? ;-)
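For completeness, the tweak under discussion is a one-line write to
sysfs; a sketch, assuming the stable /sys/block alias for the long
device paths quoted above:

/* Sketch: bump the block queue depth, as the moblin route does. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/block/sda/queue/nr_requests";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return 1;
    }
    fprintf(f, "1024\n");          /* default is 128 */
    fclose(f);
    return 0;
}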
> Looking at your boot-chart; it seems you're using blktrace to
> profile the first few preload runs, and stapio for the later ones,
> yet prepare_preload seems to work on strace output - is there a
> new way to prepare the preload output ?
Don't get too close, you might burn your fingers ;)

Greetings, Stephan