Subject: Re: [RFC] Workload type Vs Groups (Was: Re: [PATCH 02/20] blkio: Change CFQ to use CFS like queue time stamps)
On Mon, Nov 09, 2009 at 10:47:48PM +0100, Corrado Zoccolo wrote:
> On Fri, Nov 6, 2009 at 11:22 PM, Vivek Goyal <vgoyal@redhat.com> wrote:
> > Hi All,
> >
> > I am now rebasing my patches to the for-2.6.33 branch. There are a
> > significant number of changes in that branch, and in particular the
> > changes from Corrado bring up an interesting question.
> >
> > Currently Corrado has introduced the functionality of grouping the cfq
> > queues based on workload type and giving time slots to these sub-groups
> > (sync-idle, sync-noidle, async).
> >
> > I was thinking of placing groups on top of this model, so that we select
> > the group first and then select the type of workload and then finally
> > the queue to run.
> >
> > Corrado came up with an interesting suggestion (in a private mail): what
> > if we implement workload type at the top and divide the share among
> > groups with-in the workload type?
> >
> > So one would first select the workload to run and then select group
> > with-in workload and then cfq queue with-in group.
> >
> > The advantages of this approach are:
> >
> > - For the sync-noidle group, we will not idle per group. We will idle
> >  only at the root level. (Well, if we don't idle on the group once it
> >  becomes empty, we will not see fairness for the group. So it will be a
> >  fairness vs throughput call).
> >
> > - It allows us to limit the system-wide share of a workload type. So,
> >  for example, one can kind of fix the system-wide share of async queues.
> >  Generally it might not be very prudent to allocate a group 50% of the
> >  disk share, only for that group to decide to do just async IO while
> >  sync IO in the rest of the groups suffers.
> >
> > Disadvantage
> >
> > - The definition of fairness becomes a bit murkier. Now fairness will be
> >  achieved for a group with-in the workload type. So if one group is doing
> >  IO of type sync-idle as well as sync-noidle, and another group is doing
> >  IO of type sync-noidle only, then the first group will get more disk
> >  time overall even if both groups have the same weight.
>
> The fairness definition was always debated (disk time vs data transferred).
> I think that both of them have some reason to exist.
> Namely, disk time is good for sync-idle workloads, like sequential readers,
> while data transferred is good for sync-noidle workloads, like random readers.

I thought it was the reverse. For sync-noidle workloads (typically seeky),
we do a lot less IO, and size of IO is not the right measure; otherwise we
will end up giving most of the disk time to the sync-noidle queue/group,
and sync-idle queues in other groups will be heavily punished.

Time-based fairness generally should work better on seeky media. As the
seek cost starts to come down, the size of IO also starts making sense.
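
To make the distinction concrete, here is a rough sketch of the two
charging models (names are made up for this mail, not actual CFQ code),
both scaled by the entity's weight:

/*
 * Sketch only: illustrates time-based vs size-based fairness accounting.
 * Names are invented here, not existing CFQ interfaces.
 */
struct entity_sketch {
	unsigned int weight;			/* relative share */
	unsigned long long vdisktime;		/* virtual time used to order entities */
};

#define WEIGHT_BASE	1000

/* Time based: charge the elapsed slice; good when seek cost dominates. */
static void charge_by_time(struct entity_sketch *e, unsigned long slice_used)
{
	e->vdisktime += (unsigned long long)slice_used * WEIGHT_BASE / e->weight;
}

/* Size based: charge sectors transferred; makes sense as seek cost goes away. */
static void charge_by_size(struct entity_sketch *e, unsigned long sectors)
{
	e->vdisktime += (unsigned long long)sectors * WEIGHT_BASE / e->weight;
}

With charge_by_time a seeky queue pays for the seeks it causes; with
charge_by_size it pays only for the small amount of data it moves.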

In fact on SSDs, because we switch queues so fast and don't idle on the
queue, doing time accounting and providing fairness in terms of time is
hard for groups which are not continuously backlogged.

> Unfortunately, the two measures seem not to be comparable, so we seem
> obliged to schedule the two kinds of workloads independently.
> Actually, I think we can compute a feedback from each scheduling turn
> that can be used to temporarily alter weights in the next turn, in order
> to reach long-term fairness.

As one simple solution, I thought that on SSDs one could use a higher
level IO controlling policy instead of CFQ group scheduling.

Or, we bring in some measure in CFQ for fairness based on the size/amount
of IO.
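
For example (again only a sketch; the helper name is made up, though
blk_queue_nonrot() is the existing non-rotational flag test), the
accounting mode could be picked per device:

#include <linux/blkdev.h>

/*
 * Sketch: pick the fairness measure per device. Use size/amount of IO
 * on non-rotational (SSD) devices, and disk time on rotational ones.
 */
static int cfq_use_size_based_fairness(struct request_queue *q)
{
	return blk_queue_nonrot(q);
}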

Thanks
Vivek