Date: Sun, 9 Feb 2003
From: Rik van Riel
Subject: Re: stochastic fair queueing in the elevator [Re: [BENCHMARK] 2.4.20-ck3 / aa / rmap with contest]
    On Sun, 9 Feb 2003, Andrea Arcangeli wrote:

    > The only way to get the minimal possible latency and maximal fairness is
    > my new stochastic fair queueing idea.

    "The only way" ? That sounds like a lack of fantasy ;))

    Having said that, I like the idea of using SFQ for fairness,
    since it seems to work really well for networking...
    I'll definitely try such a patch.
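
    To make the idea concrete, here is a minimal userspace sketch of
    SFQ applied to an elevator: hash the submitting process into one
    of a handful of request queues and service the queues round-robin.
    All names (sfq_*, NR_SFQ_QUEUES) and the hash constant are made up
    for illustration; this is not Andrea's actual patch.

        #include <stdio.h>

        #define NR_SFQ_QUEUES 8

        struct request {
                int pid;                /* submitting process */
                long sector;            /* start sector */
                struct request *next;
        };

        static struct request *queues[NR_SFQ_QUEUES];

        /* Hash the submitting process to a queue; unrelated processes
         * usually land in different queues, so one streaming writer
         * cannot starve everybody else's reads. */
        static int sfq_hash(int pid)
        {
                return (pid * 2654435761u) % NR_SFQ_QUEUES;
        }

        static void sfq_enqueue(struct request *rq)
        {
                struct request **p = &queues[sfq_hash(rq->pid)];

                while (*p)
                        p = &(*p)->next;        /* append, keeping FIFO order */
                rq->next = NULL;
                *p = rq;
        }

        /* Round-robin over the queues, one request per non-empty queue
         * per pass, which bounds how long any single process can wait. */
        static struct request *sfq_dispatch(void)
        {
                static int last;

                for (int i = 0; i < NR_SFQ_QUEUES; i++) {
                        int q = (last + 1 + i) % NR_SFQ_QUEUES;

                        if (queues[q]) {
                                struct request *rq = queues[q];
                                queues[q] = rq->next;
                                last = q;
                                return rq;
                        }
                }
                return NULL;
        }

        int main(void)
        {
                struct request a = { 100, 0, NULL };
                struct request b = { 200, 500000, NULL };
                struct request c = { 100, 8, NULL };
                struct request *rq;

                sfq_enqueue(&a);
                sfq_enqueue(&b);
                sfq_enqueue(&c);

                /* pid 100 and pid 200 alternate, instead of pid 100
                 * draining both of its requests first */
                while ((rq = sfq_dispatch()))
                        printf("pid %d, sector %ld\n", rq->pid, rq->sector);
                return 0;
        }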

    > The only other possible fix would be to reduce the I/O queue to, say,
    > 512 kbytes and to take advantage of the FIFO behaviour of the queue
    > wakeups. I tried that and it works so well that you can trivially test
    > it with my elevator-lowlatency by just changing a line. The problem is
    > that 512k is too small an I/O pipeline, i.e. it is not enough to
    > guarantee maximal throughput during contiguous I/O.

    Maybe you want to count the I/O pipeline size in disk seeks
    rather than in disk blocks?

    In the time it takes to do one disk seek plus half a rotation
    (12 ms) you can read a pretty large amount of data (>400 kB).
    This means that for near and medium disk seeks you don't care
    all that much about how large the submitted I/O is. Track
    buffers reduce this importance even further.
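
    To put a number on that, a quick back-of-the-envelope program,
    assuming a 9 ms average seek, a 10k rpm spindle (3 ms half
    rotation) and a 35 MB/s sustained media rate; all three figures
    are assumptions about a typical 2003-era disk, plug in your own:

        #include <stdio.h>

        int main(void)
        {
                double seek_ms = 9.0;           /* average seek (assumed) */
                double half_rot_ms = 3.0;       /* half a rotation at 10k rpm */
                double media_mb_s = 35.0;       /* sustained media rate (assumed) */

                double dead_ms = seek_ms + half_rot_ms;         /* the 12 ms above */
                double kbytes = dead_ms / 1000.0 * media_mb_s * 1024.0;

                printf("one seek costs the equivalent of ~%.0f kB of reading\n",
                       kbytes);
                return 0;
        }

    That prints ~430 kB, which is where the >400 kB figure comes from:
    any request much smaller than that spends most of the disk's time
    on the seek rather than on the transfer.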

    OTOH, if you're seeking to the track next door, or have mixed
    read and write operations on the same track, the seek time
    drops to near zero and only the rotational latency counts. In
    that case the size of the I/O does influence the speed at
    which the request can be handled.

    > The stochastic fair queueing will also make anticipatory scheduling
    > a very low priority to have. Stochastic fair queueing will be an
    > order of magnitude more important than anticipatory scheduling IMHO.

    On the contrary, once we have SFQ to fix the biggest elevator
    problems, the difference made by the anticipatory scheduler
    should be much more visible.

    Think of a disk with 6 track buffers for reading and a system with
    10 active reader processes. Without the anticipatory scheduler you'd
    need to go to the platter for almost every OS read (because each
    process flushes out the track buffer for the others), while with the
    anticipatory scheduler you've got a bigger chance of having the data
    you want in one of the drive's track buffers, meaning that you don't
    need to go to the platter but can just do a silicon-to-silicon copy.
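
    A toy model makes the effect easy to see: two processes doing
    dependent (synchronous) sequential reads in disjoint areas of the
    disk, each issuing its next read shortly after the previous one
    completes. Without anticipation the elevator always finds only
    the other process's request pending, so it seeks away on every
    single read; with a small idle window it stays put. All constants
    and names below are invented for illustration, this is not the
    real scheduler:

        #include <stdio.h>

        #define NPROC          2
        #define READS_PER_PROC 100
        #define THINK_MS       1   /* gap before a process issues its next read */
        #define ANTIC_MS       3   /* how long we are willing to keep the disk idle */

        /* Count long seeks for NPROC processes doing dependent
         * sequential reads in disjoint areas of the disk. */
        static long count_seeks(int anticipate)
        {
                int remaining[NPROC];
                long seeks = 0;
                int last = 0;                   /* process we just served */

                for (int p = 0; p < NPROC; p++)
                        remaining[p] = READS_PER_PROC;
                remaining[0]--;                 /* first read, no seek counted */

                for (int served = 1; served < NPROC * READS_PER_PROC; served++) {
                        int next = -1;

                        if (anticipate && remaining[last] > 0 &&
                            THINK_MS <= ANTIC_MS) {
                                next = last;    /* idle briefly; the follow-up
                                                 * read arrives and wins */
                        } else {
                                /* serve whoever already has a request pending;
                                 * with dependent reads that is always another
                                 * process */
                                for (int p = 0; p < NPROC; p++)
                                        if (p != last && remaining[p] > 0) {
                                                next = p;
                                                break;
                                        }
                                if (next < 0)
                                        next = last;    /* only one reader left */
                        }

                        if (next != last)
                                seeks++;
                        remaining[next]--;
                        last = next;
                }
                return seeks;
        }

        int main(void)
        {
                printf("long seeks without anticipation: %ld\n", count_seeks(0));
                printf("long seeks with anticipation:    %ld\n", count_seeks(1));
                return 0;
        }

    This prints 199 seeks without anticipation and 1 with it. The real
    scheduler of course needs a timeout and per-process statistics so
    it doesn't sit idle waiting for a process that won't come back,
    but the basic win is the same one the papers measure.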

    If you look at the academic papers on anticipatory scheduling,
    you'll find that it gives as much as a 73% increase in throughput,
    and that is on real-world tasks, not on specially contrived
    benchmarks.

    The only aspect of the anticipatory scheduler that is no longer needed
    with your SFQ idea is the distinction between reads and writes, since
    your idea already makes the (better, I guess) distinction between
    synchronous and asynchronous requests.

    regards,

    Rik
    --
    Bravely reimplemented by the knights who say "NIH".
    http://www.surriel.com/ http://guru.conectiva.com/
    Current spamtrap: october@surriel.com
