    Subject: Re: (reiserfs) Re: More on 2.2.18pre2aa2
    On Wed, Sep 13, 2000 at 11:22:16AM -0400, Michael T. Babcock wrote:
    > If I may ask a potentially stupid question, how can request latency be
    > anything but a factor of time? Latency is how /long/ you (or the computer)
    > /waits/ for something. That defines it as a function of time.

    Latency is of course a function of time, but the point is that the
    acceptable latency differs from device to device. For a slower device
    a longer latency must be acceptable, and if the relationship is linear,
    then using the number of requests may be a simpler and better way of
    doing it.
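    For example (a purely illustrative sketch; the names and numbers are
    invented, not taken from the kernel), the fixed request count could be
    derived per device from its latency budget and the average time to
    serve one request:

        /* Illustrative only -- invented names and numbers. */
        #define ACCEPTABLE_LATENCY_MS   300     /* latency tolerated on this device */
        #define AVG_SERVICE_TIME_MS     3       /* mean time to serve one request   */

        static int max_queued_requests(void)
        {
                /* 300 ms / 3 ms per request = 100 requests */
                return ACCEPTABLE_LATENCY_MS / AVG_SERVICE_TIME_MS;
        }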

    Another potentially stupid question:
    When the queue gets too long/old, new requests should be put in a new
    queue to avoid starving the ones in the current queue, right?
    So if this is done by time, how do you know when the oldest request
    gets too old? You would need to index the requests both by sector and
    by time, which adds performance overhead, right?
    If, however, you have a simple rule that at most 100 requests should
    be put in each queue, it's easy to know when to start a new one. The
    number 100 should be found by calculating how many requests can be
    served within the acceptable latency.
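    As a rough sketch of that rule (user-space C, all names invented; this
    is not the actual elevator code): requests go into the current queue
    until it holds max_requests entries, and later arrivals go into the
    next queue, so requests already queued cannot be starved indefinitely:

        #include <stddef.h>

        struct request {
                long sector;
                struct request *next;
        };

        struct req_queue {
                struct request *head, *tail;
                int count;
        };

        static int max_requests = 100;  /* latency budget / per-request time */

        /* Append rq to cur, or to next once cur has reached max_requests. */
        static void queue_request(struct req_queue *cur, struct req_queue *next,
                                  struct request *rq)
        {
                struct req_queue *q = (cur->count < max_requests) ? cur : next;

                rq->next = NULL;
                if (q->tail)
                        q->tail->next = rq;
                else
                        q->head = rq;
                q->tail = rq;
                q->count++;
        }

    When the current queue drains, the driver would switch to the next
    queue and start a fresh one, which bounds how long any queued request
    can wait.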

    Ragnar Kjørstad
