 
Date: 2009-09-09
Subject: Re: Regarding dm-ioband tests
From: Ryo Tsuruta
    Hi,

    Fabio Checconi <fchecconi@gmail.com> wrote:
    > Hi,
    >
    > > From: Rik van Riel <riel@redhat.com>
    > > Date: Tue, Sep 08, 2009 03:24:08PM -0400
    > >
    > > Ryo Tsuruta wrote:
    > > >Rik van Riel <riel@redhat.com> wrote:
    > >
    > > >>Are you saying that dm-ioband is purposely unfair,
    > > >>until a certain load level is reached?
    > > >
    > > >Not unfair; dm-ioband's weight policy is intentionally designed
    > > >to use bandwidth efficiently: it tries to give the spare
    > > >bandwidth of inactive groups to active groups.
    > >
    > > This sounds good, except that the lack of anticipation
    > > means that a group with just one task doing reads will
    > > be considered "inactive" in-between reads.
    > >
    >
    > Anticipation helps in achieving fairness, but CFQ currently disables
    > idling for nonrot+NCQ media to avoid the resulting throughput loss on
    > some SSDs. Are we really sure that we want to introduce anticipation
    > everywhere, not only to improve throughput on rotational media but
    > also to achieve fairness?

    I'm also not sure it's worth introducing anticipation everywhere.
    Storage devices are becoming faster and smarter every year. In
    practice, I ran a benchmark against a SAN storage array, and the
    noop scheduler got the best results there.
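
    To make the trade-off concrete, here is a rough C sketch of the kind
    of check Fabio describes. This is not CFQ's actual code; the
    structure and helper names are all made up:

        /* Idle (anticipate) only where it is likely to pay off: on
         * non-rotational media with NCQ, the idle window mostly costs
         * throughput, so skip it there. */
        struct sched_queue {
                int device_is_rotational;   /* cleared for SSDs */
                int device_has_ncq;         /* hardware queue depth > 1 */
                int task_may_issue_soon;    /* e.g. recent sequential reads */
        };

        static int should_idle(const struct sched_queue *q)
        {
                if (!q->device_is_rotational && q->device_has_ncq)
                        return 0;   /* dispatch from other queues instead */

                /* Otherwise wait briefly for this task's next request,
                 * so it keeps its share of the disk (fairness). */
                return q->task_may_issue_soon;
        }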

    However, I'll consider how I/O from a single task should be handled.
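
    For example, one rough approach (sketched below in C; this is not
    dm-ioband code, and the names and the window length are assumptions)
    would be to mark a group inactive only after a short grace period
    with no I/O, so a lone reader keeps its weight between consecutive
    reads:

        /* Hypothetical per-group state; dm-ioband's real structures
         * differ. */
        struct io_group {
                unsigned long last_io_time;  /* tick of the group's last request */
        };

        /* Assumed ~10ms window at 1000 ticks/s. */
        #define ACTIVE_GRACE_TICKS 10

        static int group_is_active(const struct io_group *g, unsigned long now)
        {
                /* Spare bandwidth is donated to other groups only after
                 * this window expires, not the moment the queue drains. */
                return now - g->last_io_time <= ACTIVE_GRACE_TICKS;
        }

    Such a window would trade a little throughput for fairness, which is
    the same tension Fabio points out for CFQ.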

    Thanks,
    Ryo Tsuruta

