From: Jens Axboe
Subject: Re: [PATCH 2/2] cfq-iosched: Take care of corner cases of group losing share due to deletion

On Tue, Dec 08 2009, Vivek Goyal wrote:
> If there is a sequential reader running in a group, we wait for the next
> request to come in that group after slice expiry, and once the new request
> is in, we expire the queue. Otherwise we delete the group from the service
> tree and the group loses its fair share.
>
> So far I was marking a queue as wait_busy if it had consumed its slice and
> it was the last queue in the group. But this condition did not cover the
> following two cases.
>
> 1. A request completes while the slice has not expired yet. The next
> request comes in and is dispatched to disk. Now select_queue() hits and the
> slice has expired, so this group will be deleted. Because a request is
> still in the disk, this queue never gets a chance to wait_busy.
>
> 2. A request completes while the slice has not expired yet. Before the next
> request comes in (delay due to think time), select_queue() hits and expires
> the queue, and hence the group. This queue never gets a chance to wait_busy.
>
> Gui was hitting boundary condition 1 and not getting fairness numbers
> proportional to weight.
>
> This patch adds checks for the above two conditions and improves the
> fairness numbers for sequential workloads on rotational media. The check in
> select_queue() takes care of case 1 and the additional check in
> cfq_should_wait_busy() takes care of case 2.
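
The decision described above boils down to a check along these lines. This
is a sketch, not the actual patch: helper and field names (nr_cfqq,
cfq_slice_used(), active_cic, ttime_mean) are assumptions based on
cfq-iosched code of this era.

static bool cfq_should_wait_busy(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
	struct cfq_io_context *cic = cfqd->active_cic;

	/* Only the last queue in the group is a candidate for wait busy. */
	if (cfqq->cfqg->nr_cfqq > 1)
		return false;

	/*
	 * Case 1: the slice is already used up (e.g. it expired while a
	 * request was still in the driver); wait for the next request
	 * instead of deleting the group from the service tree.
	 */
	if (cfq_slice_used(cfqq))
		return true;

	/*
	 * Case 2: the remaining slice is shorter than the queue's mean
	 * think time, so the next request cannot arrive before expiry.
	 */
	if (cic && sample_valid(cic->ttime_samples) &&
	    (cfqq->slice_end - jiffies < cic->ttime_mean))
		return true;

	return false;
}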

I think this (and 1/2) look fine, just one minor comment:

> @@ -3250,6 +3264,36 @@ static void cfq_update_hw_tag(struct cfq_data *cfqd)
> cfqd->hw_tag = 0;
> }
>
> +static inline bool
> +cfq_should_wait_busy(struct cfq_data *cfqd, struct cfq_queue *cfqq)
> +{

That's too large to inline.
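
That is, drop the inline keyword and let the compiler make the call for a
static function, i.e.:

-static inline bool
+static bool
 cfq_should_wait_busy(struct cfq_data *cfqd, struct cfq_queue *cfqq)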

--
Jens Axboe


