    Subject: Re: Regarding dm-ioband tests
    Hi Rik,

    Rik van Riel <> wrote:
    > Ryo Tsuruta wrote:
    > > However, if you want to get fairness in a case like this, a new
    > > bandwidth control policy which controls accurately according to
    > > assigned weights can be added to dm-ioband.
    > Are you saying that dm-ioband is purposely unfair,
    > until a certain load level is reached?

    It is not that dm-ioband is unfair. The weight policy of dm-ioband is
    intentionally designed to use bandwidth efficiently: it gives the spare
    bandwidth of inactive groups to the active groups.
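
    To make the idea concrete, here is a minimal sketch in plain C of a
    work-conserving proportional share. The names and numbers are made up
    for illustration and this is not the actual dm-ioband token code; the
    point is only that the share an idle group would have held in a period
    is handed to the groups that actually have I/O queued.

    /* Sketch only: split a period's I/O tokens among groups in proportion
     * to their weights, counting only groups that currently issue I/O,
     * so an idle group's share goes to the active ones. */
    #include <stdio.h>

    struct group {
        const char *name;
        unsigned int weight;   /* assigned weight */
        int active;            /* has I/O queued in this period? */
    };

    static void distribute(struct group *g, int n, unsigned int total_tokens)
    {
        unsigned int active_weight = 0;
        int i;

        for (i = 0; i < n; i++)
            if (g[i].active)
                active_weight += g[i].weight;

        for (i = 0; i < n; i++) {
            unsigned int share = 0;

            if (g[i].active && active_weight)
                share = total_tokens * g[i].weight / active_weight;
            printf("%-6s weight=%-3u tokens=%u\n",
                   g[i].name, g[i].weight, share);
        }
    }

    int main(void)
    {
        struct group groups[] = {
            { "grp1", 40, 1 },
            { "grp2", 40, 0 },  /* idle: its share goes to the others */
            { "grp3", 20, 1 },
        };

        distribute(groups, 3, 600);
        return 0;
    }

    With all three groups active the split would be 240/240/120, strictly
    by weight; the redistribution only matters when some group goes idle.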

    > > We regarded reducing throughput loss rather than reducing duration
    > > as the design of dm-ioband. Of course, it is possible to make a new
    > > policy which reduces duration.
    > ... while also reducing overall system throughput
    > by design?

    I think such a policy would reduce system throughput compared to the
    current implementation, because it causes more overhead to do
    fine-grained control.

    > Why are you even bothering to submit this to the
    > linux-kernel mailing list, when there is a codebase
    > available that has no throughput or fairness regressions?
    > (Vivek's io scheduler based io controller)

    I think dm-ioband has some advantages of its own. That's why I posted
    dm-ioband to the mailing list.

    - dm-ioband supports not only a proportional weight policy but also a
    rate limiting policy. In addition, new policies can be added to
    dm-ioband if a user wants to control bandwidth with a policy of his or
    her own (see the sketch after this list).
    - The dm-ioband driver can be replaced without stopping the system by
    using device-mapper's facilities, which makes it easy to maintain.
    - dm-ioband can be used without cgroup. (I remember Vivek said this is
    not an advantage.)
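
    As a rough illustration of the policy point above, a bandwidth
    controller can keep its policies behind a small operations table so
    that adding a policy only means adding one entry. The structure and
    function names below are hypothetical, the policy bodies are stubbed
    out, and none of this is the actual dm-ioband interface.

    /* Hypothetical policy table (illustrative names, not dm-ioband's API). */
    #include <stdio.h>
    #include <string.h>

    struct io_context {
        unsigned int weight;        /* used by the weight policy */
        unsigned int max_kb_per_s;  /* used by the rate limiting policy */
    };

    struct bw_policy {
        const char *name;
        /* return nonzero if a request from this context may be issued now */
        int (*may_issue)(const struct io_context *ctx);
    };

    static int weight_may_issue(const struct io_context *ctx)
    {
        return ctx->weight > 0;        /* stand-in for token accounting */
    }

    static int range_bw_may_issue(const struct io_context *ctx)
    {
        return ctx->max_kb_per_s > 0;  /* stand-in for rate limiting */
    }

    static const struct bw_policy policies[] = {
        { "weight",   weight_may_issue },
        { "range-bw", range_bw_may_issue },
        /* a new policy only needs one more entry here */
    };

    static const struct bw_policy *find_policy(const char *name)
    {
        size_t i;

        for (i = 0; i < sizeof(policies) / sizeof(policies[0]); i++)
            if (!strcmp(policies[i].name, name))
                return &policies[i];
        return NULL;
    }

    int main(void)
    {
        struct io_context ctx = { .weight = 40, .max_kb_per_s = 0 };
        const struct bw_policy *p = find_policy("weight");

        if (p)
            printf("%s: may_issue=%d\n", p->name, p->may_issue(&ctx));
        return 0;
    }

    A lookup like this would let a user-supplied policy be selected by name
    when the device is set up.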

    Ryo Tsuruta
