Subject: Re: [patch 0/4] [RFC] Another proportional weight IO controller
> From: Fabio Checconi <fchecconi@gmail.com>
> Date: Wed, Nov 19, 2008 11:17:01AM +0100
>
> > From: Aaron Carroll <aaronc@gelato.unsw.edu.au>
> > Date: Wed, Nov 19, 2008 12:52:42PM +1100
> >
> > Fabio Checconi wrote:
> > > - To detect hw tagging in BFQ we consider a sample valid iff the
> > > number of requests that the scheduler could have dispatched (given
> > > by cfqd->rb_queued + cfqd->rq_in_driver, i.e., the ones still into
> > > the scheduler plus the ones into the driver) is higher than the
> > > CFQ_HW_QUEUE_MIN threshold. This obviously caused no problems
> > > during testing, but the way CFQ uses it now seems a little bit
> > > strange.
> >
> > BFQ's tag detection logic is broken in the same way that CFQ's used to
> > be. Explanation is in this patch:
> >
>
> If you look at bfq_update_hw_tag(), the logic introduced by the patch
> you mention is still there; BFQ starts with ->hw_tag = 1, and updates it
> every 32 valid samples. What changed WRT your patch, apart from the
> number of samples, is that the condition for a sample to be valid is:
>
> bfqd->rq_in_driver + bfqd->queued >= 5
>
> while in your patch it is:
>
> cfqd->rq_queued > 5 || cfqd->rq_in_driver > 5
>
> We preferred the first one because that sum better reflects the number
> of requests that could have been dispatched, and I don't think that this
> is wrong.
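
For reference, the detection logic described above looks roughly like
this (a sketch reconstructed from the description; the field and
constant names are assumptions, not necessarily the exact bfq-iosched.c
identifiers):

#define BFQ_HW_QUEUE_MIN	5	/* sample validity threshold */
#define BFQ_HW_QUEUE_SAMPLES	32	/* valid samples between updates */

static void bfq_update_hw_tag(struct bfq_data *bfqd)
{
	/* Track the peak number of requests seen in the driver. */
	bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
				     bfqd->rq_in_driver);

	/*
	 * A sample is valid only if the scheduler could have kept the
	 * device busy: requests still queued inside it plus requests
	 * already dispatched to the driver.
	 */
	if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_MIN)
		return;

	if (++bfqd->hw_tag_samples < BFQ_HW_QUEUE_SAMPLES)
		return;

	/*
	 * ->hw_tag starts at 1 and is re-evaluated every 32 valid
	 * samples: the device queues iff it ever kept enough requests
	 * in flight during the sampling period.
	 */
	bfqd->hw_tag = bfqd->max_rq_in_driver >= BFQ_HW_QUEUE_MIN;
	bfqd->hw_tag_samples = 0;
	bfqd->max_rq_in_driver = 0;
}
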
>
> There is a problem, but it's not within the tag detection logic itself.
> From some quick experiments, what happens is that when a process starts,
> CFQ considers it seeky (*), BFQ doesn't. As a side effect BFQ does not
> always dispatch enough requests to correctly detect tagging.
>
> At the first seek you cannot tell if the process is going to be seeky
> or not, and we have chosen to consider it sequential because it improved
> fairness in some sequential workloads (the CIC_SEEKY heuristic is used
> also to determine the idle_window length in [bc]fq_arm_slice_timer()).
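
(For context, the CIC_SEEKY flag shortens the idle window roughly like
this in CFQ's cfq_arm_slice_timer(); the BFQ version is analogous:

	sl = cfqd->cfq_slice_idle;
	if (sample_valid(cic->seek_samples) && CIC_SEEKY(cic))
		sl = min(sl, msecs_to_jiffies(CFQ_MIN_TT));

so a process classified as seeky gets only a very short idle window.)
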
>
> Anyway, we're dealing with heuristics, and they tend to favor some
> workloads over others. If recovering this throughput loss is more
> important than a transient unfairness due to short idling windows assigned
> to sequential processes when they start, I've no problems in switching
> the CIC_SEEKY logic to consider a process seeky when it starts.
>
> Thank you for testing and for pointing out this issue, we missed it
> in our testing.
>
>
> (*) to be correct, the initial classification depends on the position
> of the first accessed sector.

Sorry, I forgot the patch... This seems to solve the problem with
your workload here; does it work for you?

[ The magic number would not appear in a definitive fix... ]
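
(Why 8 * 1024 + 1: assuming the usual CFQ-style seekiness test,

	#define CIC_SEEKY(cic)	((cic)->seek_mean > (8 * 1024))

an initial pseudo-distance just past 8 KiB pushes seek_mean over the
threshold, so a new cic starts out classified as seeky.)
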


---
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 83e90e9..e9b010f 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -1322,10 +1322,12 @@ static void bfq_update_io_seektime(struct bfq_data *bfqd,
 
 	/*
 	 * Don't allow the seek distance to get too large from the
-	 * odd fragment, pagein, etc.
+	 * odd fragment, pagein, etc. The first request is not
+	 * really a seek, but we consider a cic seeky on creation
+	 * to make the hw_tag detection logic work better.
 	 */
-	if (cic->seek_samples == 0) /* first request, not really a seek */
-		sdist = 0;
+	if (cic->seek_samples == 0)
+		sdist = 8 * 1024 + 1;
 	else if (cic->seek_samples <= 60) /* second&third seek */
 		sdist = min(sdist, (cic->seek_mean * 4) + 2*1024*1024);
 	else
