Date: Thu, 24 Aug 2000
From: Jens Axboe <axboe@suse.de>
Subject: Re: ll_rw_blk.c fails to merge requests. Help!
On Wed, Aug 23 2000, Linus Torvalds wrote:
> Neil Brown <neilb@cse.unsw.edu.au> wrote:
> >
> >Not necessarily. When a request is freed, the oldest waiting thread
> >is woken, but it might not actually get to run before some other
> >thread steals the request. You could force a strict ordering if you
> >really wanted to, but I don't know how much it would help. See
> >STRICT_REQUEST_ORDERING in the patch below.
>
> Neil, I suspect the request ordering is secondary, and the real problem
> is that at some point we get into this awful steady state where we
> create new requests at the same pace as we get rid of old ones, and we
> always end up waiting for the next request to be free'd.

That is easy to trigger: just flood the queue and this will happen. I've
watched it happen.

> The "always end up waiting" thing means that we won't do a good job on
> read-ahead etc (because suddenly all our request stuff will be
> synchronous wrt the disk), so it _would_ impact performance. I think.
>
> Making the request ordering stricter won't help with this situation - it
> just makes the bad behaviour more fair. What _should_ help is to
> "batch" the freeing of requests, so that you don't end up waking up
> anybody (and everybody blocks on the requests being empty) until you've
> free'd up, say, half of the request queue again.

I did a quick patch doing just this -- and an equally quick dbench run
showed some improvement when batching frees of 64 requests at a time:

128M RAM used


test7 stock (plus Neil's remerge-on-block patch, I liked it)

burns:/mnt # ./dbench 48
48 clients started
Throughput 20.224 MB/sec (NB=25.2799 MB/sec 202.24 MBit/sec)


test7 with QUEUE_NR_REQUESTS >> 2 batched frees (+ Neil's patch, of course)

burns:/mnt # ./dbench 48
48 clients started
Throughput 23.482 MB/sec (NB=29.3525 MB/sec 234.82 MBit/sec)


Patch attached, if anyone else wants to give it a go.
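
For anyone who wants to see the idea in isolation, here is a minimal
stand-alone sketch of the same batching logic in user space. A pthreads
mutex/condvar stands in for io_request_lock and the wait queue; pool_t,
POOL_SIZE, BATCH and the helper names are invented for the example, and
only the control flow mirrors the patch:

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE       256
#define BATCH           (POOL_SIZE >> 2)  /* flush after 64 batched frees */

typedef struct {
        pthread_mutex_t lock;             /* stands in for io_request_lock */
        pthread_cond_t wait_for_request;  /* stands in for the wait queue */
        int free_count;                   /* requests visible on the freelist */
        int pending_free;                 /* frees batched up, not yet visible */
        int waiters;                      /* threads blocked in get_request_wait() */
} pool_t;

/* mirrors blkdev_flush_pending_free(): make batched frees visible */
static void flush_pending_free(pool_t *p)
{
        p->free_count += p->pending_free;
        p->pending_free = 0;
}

/* mirrors blkdev_release_request() */
static void release_request(pool_t *p)
{
        pthread_mutex_lock(&p->lock);
        if (!p->waiters) {
                /* no one waiting for free requests: free immediately */
                flush_pending_free(p);
                p->free_count++;
        } else if (++p->pending_free > BATCH) {
                /* batch request freeing: wake the sleepers only once a
                 * quarter of the pool has built up again */
                flush_pending_free(p);
                pthread_cond_broadcast(&p->wait_for_request);
        }
        pthread_mutex_unlock(&p->lock);
}

/* mirrors __get_request_wait() */
static void get_request_wait(pool_t *p)
{
        pthread_mutex_lock(&p->lock);
        p->waiters++;
        while (!p->free_count)
                pthread_cond_wait(&p->wait_for_request, &p->lock);
        p->waiters--;
        p->free_count--;
        pthread_mutex_unlock(&p->lock);
}

static void *consumer(void *arg)
{
        get_request_wait(arg);
        printf("consumer: got a request after the batched wakeup\n");
        return NULL;
}

int main(void)
{
        pool_t p = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                     0, 0, 0 };
        pthread_t t;
        int i;

        pthread_create(&t, NULL, consumer, &p);
        /* the consumer stays asleep until BATCH+1 frees have piled up */
        for (i = 0; i <= BATCH; i++)
                release_request(&p);
        pthread_join(&t, NULL);
        return 0;
}

The pthread_cond_broadcast() matches the switch from
add_wait_queue_exclusive() to add_wait_queue() in the patch below: when
the pool is replenished, all sleepers are woken at once instead of being
handed requests one at a time.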

--
* Jens Axboe <axboe@suse.de>
* SuSE Labs
--- /opt/kernel/linux-2.4.0-test7/drivers/block/ll_rw_blk.c	Thu Aug 24 05:12:54 2000
+++ drivers/block/ll_rw_blk.c	Thu Aug 24 19:42:22 2000
@@ -423,6 +423,7 @@
 	INIT_LIST_HEAD(&q->queue_head);
 	INIT_LIST_HEAD(&q->request_freelist[READ]);
 	INIT_LIST_HEAD(&q->request_freelist[WRITE]);
+	INIT_LIST_HEAD(&q->pending_freelist);
 	elevator_init(&q->elevator, ELEVATOR_LINUS);
 	blk_init_free_list(q);
 	q->request_fn = rfn;
@@ -442,6 +443,7 @@
 	 */
 	q->plug_device_fn = generic_plug_device;
 	q->head_active = 1;
+	q->pending_free = 0;
 }
 
 
@@ -494,9 +496,9 @@
 	register struct request *rq;
 	DECLARE_WAITQUEUE(wait, current);
 
-	add_wait_queue_exclusive(&q->wait_for_request, &wait);
+	add_wait_queue(&q->wait_for_request, &wait);
 	for (;;) {
-		__set_current_state(TASK_UNINTERRUPTIBLE | TASK_EXCLUSIVE);
+		__set_current_state(TASK_UNINTERRUPTIBLE);
 		spin_lock_irq(&io_request_lock);
 		rq = get_request(q, rw);
 		spin_unlock_irq(&io_request_lock);
@@ -604,20 +606,52 @@
 	(q->request_fn)(q);
 }
 
+void inline blkdev_flush_pending_free(request_queue_t *q)
+{
+	struct list_head *p = &q->pending_freelist;
+	struct request *rq;
+	p = p->next;
+	do {
+		rq = list_entry(p, struct request, table);
+		p = p->next;
+		list_del(&rq->table);
+		list_add(&rq->table, rq->free_list);
+	} while (!list_empty(&q->pending_freelist));
+	q->pending_free = 0;
+}
+
 /*
  * Must be called with io_request_lock held and interrupts disabled
  */
 void inline blkdev_release_request(struct request *req)
 {
+	request_queue_t *q = req->q;
 	req->rq_status = RQ_INACTIVE;
 
 	/*
 	 * Request may not have originated from ll_rw_blk
 	 */
-	if (req->free_list) {
+	if (req->free_list == NULL)
+		return;
+
+	/*
+	 * no one waiting for free requests
+	 */
+	if (!waitqueue_active(&q->wait_for_request)) {
+		if (!list_empty(&q->pending_freelist))
+			blkdev_flush_pending_free(q);
 		list_add(&req->table, req->free_list);
 		req->free_list = NULL;
-		wake_up(&req->q->wait_for_request);
+		return;
+	}
+
+	/*
+	 * batch request freeing
+	 */
+	list_add(&req->table, &q->pending_freelist);
+	if (++q->pending_free > QUEUE_NR_REQUESTS >> 2) {
+		blkdev_flush_pending_free(q);
+		wake_up(&q->wait_for_request);
 	}
 }
 
--- /opt/kernel/linux-2.4.0-test7/include/linux/blkdev.h	Thu Aug 10 14:28:06 2000
+++ include/linux/blkdev.h	Thu Aug 24 18:44:56 2000
@@ -77,6 +77,7 @@
 	 * the queue request freelist, one for reads and one for writes
 	 */
 	struct list_head request_freelist[2];
+	struct list_head pending_freelist;
 
 	/*
 	 * Together with queue_head for cacheline sharing
@@ -122,6 +123,7 @@
 	 * Tasks wait here for free request
 	 */
 	wait_queue_head_t wait_for_request;
+	int pending_free;
 };
 
 struct blk_dev_struct {