Hi!

I have updated the packet writing patch for use with kernel 2.5.40.

http://w1.894.telia.com/~u89404340/patches/packet/2.5/packet-2.5.40.patch.bz...

There are no exciting new features, only adaptations to kernel changes. Although this version works, I get the feeling that it is slower than it used to be. It looks like the packet driver isn't fed enough simultaneous requests. Are there any special tricks I should use to make the deadline I/O scheduler work well with a stacking block driver?

-- 
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Sun, Oct 06 2002, Peter Osterlund wrote:
Hi!
I have updated the packet writing patch for use with kernel 2.5.40.
http://w1.894.telia.com/~u89404340/patches/packet/2.5/packet-2.5.40.patch.bz...
There are no exciting new features, only adaptations to kernel changes. Although this version works, I get the feeling that it is slower than it used to be. It looks like the packet driver isn't fed enough simultaneous requests. Are there any special tricks I should use to make the deadline I/O scheduler work well with a stacking block driver?
Maybe the loss of batching? That was recently adjusted down in the 2.5 kernels. The deadline scheduler itself should not give you any worse performance in the general case. cd-rw has other costs that could be interesting to factor in; things like switching from writing to reading come with a cost as well. I _think_ the streamed i/o vs seeky i/o accounting should work fine for cds too.

If you can help me quantify what exactly is slower (or why it feels slower), then I can surely help you do a general solution for this. I would be inclined to say that you should just printk every request extracted from the io scheduler with the old version and with deadline and compare them. There just might be some obvious bug there.

-- 
Jens Axboe
On Sun, 6 Oct 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
Hi!
I have updated the packet writing patch for use with kernel 2.5.40.
http://w1.894.telia.com/~u89404340/patches/packet/2.5/packet-2.5.40.patch.bz...
There are no exciting new features, only adaptations to kernel changes. Although this version works, I get the feeling that it is slower than it used to be. It looks like the packet driver isn't fed enough simultaneous requests. Are there any special tricks I should use to make the deadline I/O scheduler work well with a stacking block driver?
If you can help me quantify what exactly is slower (or why it feels slower), then I can surely help you do a general solution for this. I would be inclined to say that you should just printk every request extracted from the io scheduler with the old version and with deadline and compare them. There just might be some obvious bug there.
One thing I noticed is that the i/o scheduler sometimes feeds requests in reverse order to the cd. This may be OK for hard disks, but seems to be bad for cds, at least my cd. Is this supposed to happen with the new i/o scheduler?

Oct 6 22:23:08 pengo kernel: usb-storage: 28 00 00 00 60 80 00 00 0f 00 00 00
Oct 6 22:23:08 pengo kernel: usb-storage: 2a 00 00 00 62 40 00 00 20 00 00 00
Oct 6 22:23:10 pengo kernel: usb-storage: 2a 00 00 00 62 20 00 00 20 00 00 00
Oct 6 22:23:11 pengo kernel: usb-storage: 2a 00 00 00 62 00 00 00 20 00 00 00
Oct 6 22:23:13 pengo kernel: usb-storage: 2a 00 00 00 61 e0 00 00 20 00 00 00
Oct 6 22:23:14 pengo kernel: usb-storage: 2a 00 00 00 61 c0 00 00 20 00 00 00
Oct 6 22:23:15 pengo kernel: usb-storage: 2a 00 00 00 61 a0 00 00 20 00 00 00
Oct 6 22:23:16 pengo kernel: usb-storage: 2a 00 00 00 61 80 00 00 20 00 00 00
Oct 6 22:23:18 pengo kernel: usb-storage: 2a 00 00 00 61 60 00 00 20 00 00 00
Oct 6 22:23:19 pengo kernel: usb-storage: 2a 00 00 00 61 40 00 00 20 00 00 00
Oct 6 22:23:20 pengo kernel: usb-storage: 2a 00 00 00 61 20 00 00 20 00 00 00
Oct 6 22:23:21 pengo kernel: usb-storage: 2a 00 00 00 61 00 00 00 20 00 00 00
Oct 6 22:23:23 pengo kernel: usb-storage: 2a 00 00 00 60 e0 00 00 20 00 00 00
Oct 6 22:23:24 pengo kernel: usb-storage: 2a 00 00 00 60 c0 00 00 20 00 00 00
Oct 6 22:23:25 pengo kernel: usb-storage: 2a 00 00 00 60 a0 00 00 20 00 00 00
Oct 6 22:23:26 pengo kernel: usb-storage: 2a 00 00 00 07 a0 00 00 20 00 00 00
Oct 6 22:23:28 pengo kernel: usb-storage: 2a 00 00 00 07 80 00 00 20 00 00 00
Oct 6 22:23:29 pengo kernel: usb-storage: 2a 00 00 00 07 60 00 00 20 00 00 00
Oct 6 22:23:30 pengo kernel: usb-storage: 2a 00 00 00 07 40 00 00 20 00 00 00
Oct 6 22:23:31 pengo kernel: usb-storage: 2a 00 00 00 07 20 00 00 20 00 00 00
Oct 6 22:23:33 pengo kernel: usb-storage: 2a 00 00 00 07 00 00 00 20 00 00 00
Oct 6 22:23:34 pengo kernel: usb-storage: 2a 00 00 00 06 e0 00 00 20 00 00 00
Oct 6 22:23:35 pengo kernel: usb-storage: 2a 00 00 00 06 c0 00 00 20 00 00 00
Oct 6 22:23:36 pengo kernel: usb-storage: 2a 00 00 00 06 a0 00 00 20 00 00 00
Oct 6 22:23:38 pengo kernel: usb-storage: 2a 00 00 00 06 80 00 00 20 00 00 00
Oct 6 22:23:39 pengo kernel: usb-storage: 2a 00 00 00 06 60 00 00 20 00 00 00
Oct 6 22:23:40 pengo kernel: usb-storage: 2a 00 00 00 06 40 00 00 20 00 00 00
Oct 6 22:23:41 pengo kernel: usb-storage: 2a 00 00 00 06 20 00 00 20 00 00 00
Oct 6 22:23:43 pengo kernel: usb-storage: 2a 00 00 00 06 00 00 00 20 00 00 00
Oct 6 22:23:44 pengo kernel: usb-storage: 2a 00 00 00 05 e0 00 00 20 00 00 00
Oct 6 22:23:45 pengo kernel: usb-storage: 2a 00 00 00 05 c0 00 00 20 00 00 00
Oct 6 22:23:46 pengo kernel: usb-storage: 2a 00 00 00 07 c0 00 00 20 00 00 00
Oct 6 22:23:47 pengo kernel: usb-storage: 2a 00 00 00 60 80 00 00 20 00 00 00
Oct 6 22:23:49 pengo kernel: usb-storage: 2a 00 00 00 62 60 00 00 20 00 00 00
Oct 6 22:23:50 pengo kernel: usb-storage: 2a 00 00 00 62 80 00 00 20 00 00 00
Oct 6 22:23:50 pengo kernel: usb-storage: 2a 00 00 00 62 a0 00 00 20 00 00 00
Oct 6 22:23:50 pengo kernel: usb-storage: 2a 00 00 00 62 c0 00 00 20 00 00 00
Oct 6 22:23:51 pengo kernel: usb-storage: 2a 00 00 00 62 e0 00 00 20 00 00 00
Oct 6 22:23:52 pengo kernel: usb-storage: 2a 00 00 00 63 00 00 00 20 00 00 00
Oct 6 22:23:53 pengo kernel: usb-storage: 2a 00 00 00 63 20 00 00 20 00 00 00
Oct 6 22:23:53 pengo kernel: usb-storage: 2a 00 00 00 63 40 00 00 20 00 00 00
Oct 6 22:23:54 pengo kernel: usb-storage: 2a 00 00 00 63 60 00 00 20 00 00 00
Oct 6 22:23:54 pengo kernel: usb-storage: 2a 00 00 00 63 80 00 00 20 00 00 00
Oct 6 22:23:55 pengo kernel: usb-storage: 2a 00 00 00 63 a0 00 00 20 00 00 00
Oct 6 22:23:56 pengo kernel: usb-storage: 2a 00 00 00 63 c0 00 00 20 00 00 00
Oct 6 22:23:56 pengo kernel: usb-storage: 2a 00 00 00 63 e0 00 00 20 00 00 00
Oct 6 22:23:57 pengo kernel: usb-storage: 2a 00 00 00 64 00 00 00 20 00 00 00
Oct 6 22:23:57 pengo kernel: usb-storage: 2a 00 00 00 64 20 00 00 20 00 00 00
Oct 6 22:23:58 pengo kernel: usb-storage: 2a 00 00 00 64 40 00 00 20 00 00 00
Oct 6 22:23:59 pengo kernel: usb-storage: 2a 00 00 00 64 60 00 00 20 00 00 00
Oct 6 22:23:59 pengo kernel: usb-storage: 2a 00 00 00 64 80 00 00 20 00 00 00
Oct 6 22:24:00 pengo kernel: usb-storage: 2a 00 00 00 64 a0 00 00 20 00 00 00
Oct 6 22:24:00 pengo kernel: usb-storage: 2a 00 00 00 64 c0 00 00 20 00 00 00
Oct 6 22:24:00 pengo kernel: usb-storage: 2a 00 00 00 64 e0 00 00 20 00 00 00
Oct 6 22:24:01 pengo kernel: usb-storage: 2a 00 00 00 65 00 00 00 20 00 00 00
Oct 6 22:24:02 pengo kernel: usb-storage: 2a 00 00 00 65 20 00 00 20 00 00 00
Oct 6 22:24:03 pengo kernel: usb-storage: 2a 00 00 00 65 40 00 00 20 00 00 00
Oct 6 22:24:03 pengo kernel: usb-storage: 2a 00 00 00 65 60 00 00 20 00 00 00
Oct 6 22:24:03 pengo kernel: usb-storage: 2a 00 00 00 65 80 00 00 20 00 00 00
Oct 6 22:24:04 pengo kernel: usb-storage: 2a 00 00 00 65 a0 00 00 20 00 00 00
Oct 6 22:24:05 pengo kernel: usb-storage: 2a 00 00 00 65 c0 00 00 20 00 00 00
Oct 6 22:24:06 pengo kernel: usb-storage: 2a 00 00 00 65 e0 00 00 20 00 00 00
Oct 6 22:24:06 pengo kernel: usb-storage: 2a 00 00 00 66 00 00 00 20 00 00 00

-- 
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Sun, Oct 06 2002, Peter Osterlund wrote:
On Sun, 6 Oct 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
Hi!
I have updated the packet writing patch for use with kernel 2.5.40.
http://w1.894.telia.com/~u89404340/patches/packet/2.5/packet-2.5.40.patch.bz...
There are no exciting new features, only adaptations to kernel changes. Although this version works, I get the feeling that it is slower than it used to be. It looks like the packet driver isn't fed enough simultaneous requests. Are there any special tricks I should use to make the deadline I/O scheduler work well with a stacking block driver?
If you can help me quantify what exactly is slower (or why it feels slower), then I can surely help you do a general solution for this. I would be inclined to say that you should just printk every request extracted from the io scheduler with the old version and with deadline and compare them. There just might be some obvious bug there.
One thing I noticed is that the i/o scheduler sometimes feeds requests in reverse order to the cd. This may be OK for hard disks, but seems to be bad for cds, at least my cd. Is this supposed to happen with the new i/o scheduler?
No, that would be a bug regardless. Hmm strange, will try some profiling myself. -- Jens Axboe
On Mon, Oct 07 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
On Sun, 6 Oct 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
Hi!
I have updated the packet writing patch for use with kernel 2.5.40.
http://w1.894.telia.com/~u89404340/patches/packet/2.5/packet-2.5.40.patch.bz...
There are no exciting new features, only adaptations to kernel changes. Although this version works, I get the feeling that it is slower than it used to be. It looks like the packet driver isn't fed enough simultaneous requests. Are there any special tricks I should use to make the deadline I/O scheduler work well with a stacking block driver?
If you can help me quantify what exactly is slower (or why it feels slower), then I can surely help you do a general solution for this. I would be inclined to say that you should just printk every request extracted from the io scheduler with the old version and with deadline and compare them. There just might be some obvious bug there.
One thing I noticed is that the i/o scheduler sometimes feeds requests in reverse order to the cd. This may be OK for hard disks, but seems to be bad for cds, at least my cd. Is this supposed to happen with the new i/o scheduler?
No, that would be a bug regardless. Hmm strange, will try some profiling myself.
BTW, just curious, do the reverse order requests go away if you disable:

	/*
	 * no insertion point found, check the very front
	 */
	if (!*insert && !list_empty(sort_list)) {
		__rq = list_entry_rq(sort_list->next);

		if (bio->bi_sector + bio_sectors(bio) < __rq->sector
		    && bio->bi_sector > deadline_get_last_sector(dd))
			*insert = sort_list;
	}

in drivers/block/deadline-iosched.c? Just #if 0 out the entire block above.

-- 
Jens Axboe
On Mon, 7 Oct 2002, Jens Axboe wrote:
On Mon, Oct 07 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
On Sun, 6 Oct 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
It looks like the packet driver isn't fed enough simultaneous requests.
This was a false alarm. The code in pkt_proc_device that prints request queue information just doesn't provide the whole truth when the deadline scheduler is used.
One thing I noticed is that the i/o scheduler sometimes feeds requests in reverse order to the cd. This may be OK for hard disks, but seems to be bad for cds, at least my cd. Is this supposed to happen with the new i/o scheduler?
No, that would be a bug regardless. Hmm strange, will try some profiling myself.
BTW, just curious, do the reverse order requests go away if you disable:

	/*
	 * no insertion point found, check the very front
	 */
	if (!*insert && !list_empty(sort_list)) {
		__rq = list_entry_rq(sort_list->next);

		if (bio->bi_sector + bio_sectors(bio) < __rq->sector
		    && bio->bi_sector > deadline_get_last_sector(dd))
			*insert = sort_list;
	}

in drivers/block/deadline-iosched.c? Just #if 0 out the entire block above.
No, it doesn't make a difference. It's very repeatable by simply creating an empty 10 MB file with dd. I have configured the packet driver to work on 32 simultaneous packets, and it immediately sends 30+ write requests to the cdrw request queue and then keeps the request queue busy. However, the first 30+ requests appear in reverse order. After that the requests appear in standard order.

I think I'll go study the deadline i/o scheduler code in detail.

-- 
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Mon, Oct 07 2002, Peter Osterlund wrote:
On Mon, 7 Oct 2002, Jens Axboe wrote:
On Mon, Oct 07 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
On Sun, 6 Oct 2002, Jens Axboe wrote:
On Sun, Oct 06 2002, Peter Osterlund wrote:
It looks like the packet driver isn't fed enough simultaneous requests.
This was a false alarm. The code in pkt_proc_device that prints request queue information just doesn't provide the whole truth when the deadline scheduler is used.
Counting queue_head entries only?
One thing I noticed is that the i/o scheduler sometimes feeds requests in reverse order to the cd. This may be OK for hard disks, but seems to be bad for cds, at least my cd. Is this supposed to happen with the new i/o scheduler?
No, that would be a bug regardless. Hmm strange, will try some profiling myself.
BTW, just curious, do the reverse order requests go away if you disable:

	/*
	 * no insertion point found, check the very front
	 */
	if (!*insert && !list_empty(sort_list)) {
		__rq = list_entry_rq(sort_list->next);

		if (bio->bi_sector + bio_sectors(bio) < __rq->sector
		    && bio->bi_sector > deadline_get_last_sector(dd))
			*insert = sort_list;
	}

in drivers/block/deadline-iosched.c? Just #if 0 out the entire block above.
No, it doesn't make a difference. It's very repeatable by simply creating an empty 10Mb file with dd. I have configured the packet driver to work on 32 simultaneous packets, and it immediately sends 30+ write requests to the cdrw request queue and then keeps the request queue busy. However, the first 30+ requests appear in reverse order. After that the requests appear in standard order.
Ok, nice little repeatable case, should be easy to fix then.
I think I'll go study the deadline i/o scheduler code in detail.
Knock yourself out :). I'll get on this first thing tomorrow. -- Jens Axboe
On Mon, 7 Oct 2002, Jens Axboe wrote:
On Mon, Oct 07 2002, Peter Osterlund wrote:
This was a false alarm. The code in pkt_proc_device that prints request queue information just doesn't provide the whole truth when the deadline scheduler is used.
Counting queue_head entries only?
Yes.
I think I'll go study the deadline i/o scheduler code in detail.
Knock yourself out :). I'll get on this first thing tomorrow.
I don't think this is related to my problem, but the bio_rq_in_between function seems to assume that bio and rq are for the same device, while deadline_merge calls that function without making sure that is the case. Can't this make the insert point wrong? A request for (device, sector) = (1, 50) can be inserted between (0, 10) and (0, 100).

-- 
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Tue, Oct 08 2002, Peter Osterlund wrote:
I think I'll go study the deadline i/o scheduler code in detail.
Knock yourself out :). I'll get on this first thing tomorrow.
I don't think this is related to my problem, but the bio_rq_in_between function seems to assume that bio and rq are for the same device, but deadline_merge calls that function without making sure that is the case. Can't this make the insert point wrong?
The first thing to note is that the entries on the queue really _should_ be for the same device. In 2.4 and before we could have stuff like "sector 128 on partition 3" and "sector 0 on partition 4" on the same queue, but now each bio is partition mapped before being attached to a request. So bio_rq_in_between() does not assume that all requests are for the same device, but it uses logic that assumes 2.4 behaviour (or, makes most sense there), namely that in the above case sector 0 on partition 4 is assumed to be bigger than last sector on partition 3.
A request for (device, sector) = (1, 50) can be inserted between (0, 10) and (0, 100).
I'll take a look at the 2.5 pktcdvd code, but iirc (and I doubt you changed that) I made the queues per packet device. This is the way it must be, and the above will not cause any problems. Basically, if you have more than one spindle per queue you are screwed; the io scheduler cannot make good decisions.

-- 
Jens Axboe
participants (2):
- Jens Axboe
- Peter Osterlund