    Subject: Re: IO scheduler based IO controller V10
    From: Jens Axboe <jens.axboe@oracle.com>
    Date: Fri, 2 Oct 2009
    On Fri, Oct 02 2009, Ingo Molnar wrote:
    >
    > * Jens Axboe <jens.axboe@oracle.com> wrote:
    >
    > > On Fri, Oct 02 2009, Ingo Molnar wrote:
    > > >
    > > > * Jens Axboe <jens.axboe@oracle.com> wrote:
    > > >
    > > > > It's not _that_ easy, it depends a lot on the access patterns. A
    > > > > good example of that is actually the idling that we already do.
    > > > > Say you have two applications, each starting up. If you start them
    > > > > both at the same time and just care about dumb low latency, then
    > > > > you'll do one IO from each of them in turn. Latency will be good,
    > > > > but throughput will be awful. And this means that in 20s they are
    > > > > both started, while with the slice idling and priority disk access
    > > > > that CFQ does, you'd hopefully have both up and running in 2s.
    > > > >
    > > > > So latency is good, definitely, but sometimes you have to worry
    > > > > about the bigger picture too. Latency is more than single IOs;
    > > > > it's often for a complete operation which may involve lots of IOs.
    > > > > Single IO latency is a benchmark thing; it's not a real-life
    > > > > issue. And that's where it becomes complex and not so black and
    > > > > white. Mike's test is a really good example of that.
    > > >
    > > > To the extent that you're arguing Mike's test is artificial (I'm not
    > > > sure you are arguing that) - Mike certainly did not do an artificial
    > > > test - he tested 'konsole' cache-cold startup latency, such as:
    > >
    > > [snip]
    > >
    > > I was saying the exact opposite, that Mike's test is a good example of
    > > a valid test. It's not measuring single IO latencies, it's doing a
    > > sequence of valid events and looking at the latency for those. It's
    > > benchmarking the bigger picture, not a microbenchmark.
    >
    > Good, so we are in violent agreement :-)

    Yes, perhaps that last sentence didn't provide enough evidence of which
    category I put Mike's test into :-)
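
    (As an aside, the throughput argument quoted above is easy to see with a
    toy calculation. The little program below only illustrates the reasoning;
    it is not a model of CFQ. The seek/transfer costs, the slice size, and
    every name in it are invented for the example.)

    /*
     * Toy model of the trade-off described above: two applications each
     * need a run of reads from a rotational disk.  Interleaving one IO
     * from each ("dumb low latency") pays a seek on nearly every request,
     * while serving longer slices pays a seek only when switching streams.
     * All cost numbers are invented purely for illustration.
     */
    #include <stdio.h>

    #define IOS_PER_APP  200    /* IOs each application needs to start up */
    #define SEEK_MS      8.0    /* cost of moving the head between streams */
    #define TRANSFER_MS  0.5    /* cost of one sequential IO */
    #define SLICE_IOS    100    /* IOs served per slice with idling */

    int main(void)
    {
            /* Policy A: strict alternation, one IO per app per dispatch. */
            double alternate = 2 * IOS_PER_APP * (SEEK_MS + TRANSFER_MS);

            /* Policy B: serve SLICE_IOS sequential IOs, then switch. */
            int switches = 2 * IOS_PER_APP / SLICE_IOS;
            double sliced = 2 * IOS_PER_APP * TRANSFER_MS + switches * SEEK_MS;

            printf("alternating single IOs: both apps up after ~%.0f ms\n",
                   alternate);
            printf("slice-based service:    both apps up after ~%.0f ms\n",
                   sliced);
            return 0;
    }

    With these made-up numbers the slice-based service finishes roughly 15x
    sooner, which is the same shape as the 20s vs 2s example above.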

    So to kick things off, I added an 'interactive' knob to CFQ and
    defaulted it to on, along with re-enabling slice idling for hardware
    that does tagged command queuing. This is almost completely identical to
    what Vivek Goyal originally posted; it's just combined into one and uses
    the term 'interactive' instead of 'fairness'. I think the former is a
    better umbrella under which to add further tweaks that may sacrifice
    throughput slightly, in the quest for better latency.
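
    (For anyone reading along, the decision that knob influences boils down
    to something like the sketch below. It is only an illustration that
    compiles in user space; the struct and the names 'hw_tag',
    'cfq_interactive' and 'toy_should_idle' are stand-ins chosen for this
    example, not the actual cfq-iosched.c change in for-linus.)

    /*
     * Illustration only: without the knob, slice idling is skipped on
     * hardware that does tagged command queuing; with 'interactive' set
     * (the default, per the above), idling is kept so a task's sequential
     * IO isn't constantly interleaved with other streams.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct toy_cfq_data {
            bool hw_tag;            /* device does tagged command queuing */
            bool cfq_interactive;   /* the new tunable, defaulting to on */
    };

    static bool toy_should_idle(const struct toy_cfq_data *cfqd)
    {
            /* Non-queuing disks: idling has always been worthwhile. */
            if (!cfqd->hw_tag)
                    return true;

            /* Queuing hardware: only idle if the interactive knob is set. */
            return cfqd->cfq_interactive;
    }

    int main(void)
    {
            struct toy_cfq_data on  = { .hw_tag = true, .cfq_interactive = true };
            struct toy_cfq_data off = { .hw_tag = true, .cfq_interactive = false };

            printf("TCQ disk, interactive=1: idle=%d\n", toy_should_idle(&on));
            printf("TCQ disk, interactive=0: idle=%d\n", toy_should_idle(&off));
            return 0;
    }

    The real scheduler has more conditions feeding that decision, of course;
    this only shows where the tunable plugs in.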

    It's queued up in the for-linus branch.

    --
    Jens Axboe


