Subject: Re: [RFC][PATCH 0/3] Skip I/O merges when disabled
>>
>> I'll look into retaining the one-hit cache merge functionality, remove
>> the errant elv_rqhash_del code, and repost w/ the results from the other
>> tests I've run.
>
> Also please do a check where you only disable the front merge logic, as
> that is the most expensive bit (and the least likely to occur). I would
> not be surprised if just removing the front merge bit would get you the
> majority of the gain already. I have in the past considered just getting
> rid of that bit, as it rarely triggers and it is a costly rbtree lookup
> for each IO. The back merge lookup+merge should be cheaper; it's just a
> hash lookup.
>

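For reference, the ordering described above -- one-hit cache first, then
the rq_hash back-merge lookup, then the rbtree front-merge lookup -- can
be modeled in plain userspace C roughly as follows. The structs and
lookup helpers are simplified stand-ins, not the kernel's own code:

/*
 * Simplified userspace model of the merge-lookup order discussed above.
 * The request/bio structs and the lookup helpers are stand-ins, not the
 * kernel's types or functions.
 */
#include <stdio.h>

struct request { unsigned long long sector, nr_sectors; };
struct bio     { unsigned long long sector, nr_sectors; };

enum merge { NO_MERGE, BACK_MERGE, FRONT_MERGE };

/* one-hit cache: the request that merged most recently (q->last_merge) */
static struct request *last_merge;

/* stand-in for the cheap rq_hash lookup, keyed on a request's end sector */
static struct request *hash_find_back(unsigned long long sector)
{
	(void)sector;
	return NULL;		/* pretend nothing hashes to this sector */
}

/*
 * stand-in for the per-elevator rbtree walk, keyed on start sector --
 * the costly front-merge lookup the thread proposes skipping
 */
static struct request *tree_find_front(unsigned long long sector)
{
	(void)sector;
	return NULL;
}

static enum merge try_merge(struct bio *bio)
{
	struct request *rq;

	/* 1. cheapest: does the bio directly extend the last merged request? */
	if (last_merge &&
	    last_merge->sector + last_merge->nr_sectors == bio->sector)
		return BACK_MERGE;

	/* 2. back merge: hash lookup on the bio's start sector */
	rq = hash_find_back(bio->sector);
	if (rq)
		return BACK_MERGE;

	/* 3. front merge: rbtree lookup on the bio's end sector */
	rq = tree_find_front(bio->sector + bio->nr_sectors);
	if (rq)
		return FRONT_MERGE;

	return NO_MERGE;
}

int main(void)
{
	struct bio bio = { .sector = 2048, .nr_sectors = 8 };

	printf("merge decision: %d\n", try_merge(&bio));
	return 0;
}
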
I have the results from leaving in just the one-hit cache merge
attempts, and have started a run leaving in both that and the back-merge
rq_hash checks. (The patch below basically undoes patch 3/3: it puts
requests back onto the hash list and moves the nomerges check below the
back-merge attempts.)

We /could/ change the tunable to a dial (or a mask) that enables/disables
specific merge attempts, but that seems a bit confusing/complex.

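As a rough illustration only (the bit names below are hypothetical and
not part of the posted patches), a mask-style tunable might look
something like:

/*
 * Hypothetical mask-style tunable -- illustration only; these bit names
 * do not exist in the posted patches.
 */
#include <stdio.h>

#define MERGE_ONEHIT	(1u << 0)	/* q->last_merge one-hit cache  */
#define MERGE_BACK	(1u << 1)	/* rq_hash back-merge lookups   */
#define MERGE_FRONT	(1u << 2)	/* elevator rbtree front merges */

/*
 * would live per-queue and be set through sysfs, instead of the single
 * on/off "nomerges" flag
 */
static unsigned int merge_mask = MERGE_ONEHIT | MERGE_BACK;

static int merge_allowed(unsigned int type)
{
	return (merge_mask & type) != 0;
}

int main(void)
{
	printf("one-hit:%d back:%d front:%d\n",
	       merge_allowed(MERGE_ONEHIT),
	       merge_allowed(MERGE_BACK),
	       merge_allowed(MERGE_FRONT));
	return 0;
}
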
Jens: What do you think?

Alan
From eb158393a5fd2eec0582bbba8af588be7e08ef32 Mon Sep 17 00:00:00 2001
From: Alan D. Brunelle <alan.brunelle@hp.com>
Date: Fri, 25 Apr 2008 07:14:42 -0400
Subject: [PATCH] Enables back-merge checks (and one-hit cache checks) for merges

Undoes patch 3/3 -- puts rqs onto the rq_hash list -- and performs simple
hash list checks for back-merges only.

Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
---
block/elevator.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/elevator.c b/block/elevator.c
index 557ee38..59be58d 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -488,9 +488,6 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
 		}
 	}
 
-	if (blk_queue_nomerges(q))
-		return ELEVATOR_NO_MERGE;
-
 	/*
 	 * See if our hash lookup can find a potential backmerge.
 	 */
@@ -500,6 +497,9 @@ int elv_merge(struct request_queue *q, struct request **req, struct bio *bio)
 		return ELEVATOR_BACK_MERGE;
 	}
 
+	if (blk_queue_nomerges(q))
+		return ELEVATOR_NO_MERGE;
+
 	if (e->ops->elevator_merge_fn)
 		return e->ops->elevator_merge_fn(q, req, bio);
 
@@ -604,7 +604,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
 		BUG_ON(!blk_fs_request(rq));
 		rq->cmd_flags |= REQ_SORTED;
 		q->nr_sorted++;
-		if (!blk_queue_nomerges(q) && rq_mergeable(rq)) {
+		if (rq_mergeable(rq)) {
 			elv_rqhash_add(q, rq);
 			if (!q->last_merge)
 				q->last_merge = rq;
--
1.5.2.5