Subject: Re: IO scheduler based IO controller V10
On Sun, 2009-09-27 at 18:42 +0200, Jens Axboe wrote:

> It's a given that not merging will provide better latency. We can't
> disable that or performance will suffer A LOT on some systems. There are
> ways to make it better, though. One would be to make the max request
> size smaller, but that would also hurt for streamed workloads. Can you
> try whether the below patch makes a difference? It will basically
> disallow merges to a request that isn't the last one.
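(Jens' patch is not quoted above. For readers following along, a minimal
sketch of the "only merge into the last request" idea against the
2.6.31-era block layer -- elv_merge_last_only() is a made-up name, not
Jens' actual change:)

#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/elevator.h>

/*
 * Sketch only, NOT Jens' actual patch: consider only the most
 * recently queued request (q->last_merge) as a merge candidate
 * and skip the elevator's merge hash entirely, so a stream of
 * small requests can't keep growing older ones indefinitely.
 */
static int elv_merge_last_only(struct request_queue *q,
			       struct request **req, struct bio *bio)
{
	struct request *rq = q->last_merge;

	if (rq && elv_rq_merge_ok(rq, bio)) {
		/* bio appends directly behind the last request? */
		if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_sector) {
			*req = rq;
			return ELEVATOR_BACK_MERGE;
		}
		/* bio sits directly in front of it? */
		if (blk_rq_pos(rq) == bio->bi_sector + bio_sectors(bio)) {
			*req = rq;
			return ELEVATOR_FRONT_MERGE;
		}
	}
	return ELEVATOR_NO_MERGE;
}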

Thoughts about something like the below?

The problem with the dd vs konsole -e exit type load seems to be
kjournald overloading the disk between reads. When userland is blocked,
kjournald is free to stuff 4*quantum into the queue instantly.
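(For reference, the "4*quantum" is CFQ's dispatch overload allowance.
A paraphrase of the logic in the 2.6.31-era cfq_dispatch_requests(),
from memory rather than verbatim, folded into a made-up helper
cfq_may_overload():)

/*
 * Paraphrase of this era's overload check: once a queue has a full
 * quantum in flight it may only keep dispatching if it is the sole
 * busy queue, and even then no further than 4 * quantum.  That is
 * the window kjournald exploits while the reader is blocked.
 */
static int cfq_may_overload(struct cfq_data *cfqd, struct cfq_queue *cfqq,
			    unsigned int max_dispatch)
{
	if (cfqq->dispatched < max_dispatch)
		return 1;
	/* other queues are busy: no overload at all */
	if (cfqd->busy_queues > 1)
		return 0;
	/* we are the only queue, allow up to 4 times of 'quantum' */
	return cfqq->dispatched < 4 * max_dispatch;
}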

Taking the hint from Vivek's fairness tweakable patch, I stamped the
queue when a seeky queue was last seen, and disallowed dispatch overload
within CIC_SEEK_THR jiffies of that stamp. Worked well.
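(Two details worth noting: time_before() keeps the comparison safe
across jiffies wraparound, and CIC_SEEK_THR is nominally a seek
*distance* threshold -- 8*1024, if memory serves -- so reused as a
jiffies delta it comes to roughly 8 seconds at HZ=1000. Standalone
sketch of the stamp-and-check idiom; in the patch the stamp lives
in cfq_data:)

#include <linux/jiffies.h>

static unsigned long last_seeker;	/* jiffies stamp */

static int seeker_recently_seen(unsigned long window)
{
	/* time_before() is wraparound-safe for jiffies math */
	return time_before(jiffies, last_seeker + window);
}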

dd competing against perf stat -- konsole -e exec timings (seconds),
5 back to back runs:

            1       2       3       4       5     Avg
before   9.15   14.51    9.39   15.06    9.90    11.6
after    1.76    1.54    1.93    1.88    1.56     1.7

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index e2a9b92..4a00129 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -174,6 +174,8 @@ struct cfq_data {
 	unsigned int cfq_slice_async_rq;
 	unsigned int cfq_slice_idle;
 
+	unsigned long last_seeker;
+
 	struct list_head cic_list;
 
 	/*
@@ -1326,6 +1328,12 @@ static int cfq_dispatch_requests(struct request_queue *q, int force)
 			return 0;
 
 		/*
+		 * We may have seeky queues, don't throttle up just yet.
+		 */
+		if (time_before(jiffies, cfqd->last_seeker + CIC_SEEK_THR))
+			return 0;
+
+		/*
 		 * we are the only queue, allow up to 4 times of 'quantum'
 		 */
 		if (cfqq->dispatched >= 4 * max_dispatch)
@@ -1941,7 +1949,7 @@ static void
 cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 		       struct cfq_io_context *cic)
 {
-	int old_idle, enable_idle;
+	int old_idle, enable_idle, seeky = 0;
 
 	/*
 	 * Don't idle for async or idle io prio class
@@ -1951,8 +1959,12 @@ cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 
 	enable_idle = old_idle = cfq_cfqq_idle_window(cfqq);
 
-	if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
-	    (cfqd->hw_tag && CIC_SEEKY(cic)))
+	if (cfqd->hw_tag && CIC_SEEKY(cic)) {
+		cfqd->last_seeker = jiffies;
+		seeky = 1;
+	}
+
+	if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle || seeky)
 		enable_idle = 0;
 	else if (sample_valid(cic->ttime_samples)) {
 		if (cic->ttime_mean > cfqd->cfq_slice_idle)
@@ -2482,6 +2494,7 @@ static void *cfq_init_queue(struct request_queue *q)
 	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
 	cfqd->cfq_slice_idle = cfq_slice_idle;
 	cfqd->hw_tag = 1;
+	cfqd->last_seeker = jiffies;
 
 	return cfqd;
 }


