    Subject: Re: [resent PATCH] Re: very slow parallel read performance
    On Fri, 24 Aug 2001, Roger Larsson wrote:

    > I questioned this earlier too...
    > And I found out that the read-ahead was too short for modern disks.
    > This is a patch I did; it also enables the profiling.  The only line
    > that is really needed is:
    > -#define MAX_READAHEAD 31
    > +#define MAX_READAHEAD 511
    > I have not tried to push it further up, since that resulted in virtually
    > equal total throughput when reading two files as when reading one.
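
    For scale: assuming 4 KB pages and a window of roughly MAX_READAHEAD
    pages, that one-line change grows the per-stream read-ahead window from
    about 124 KB to about 2 MB.  A quick back-of-the-envelope sketch:

    #include <stdio.h>

    #define PAGE_BYTES 4096UL       /* assumed: 4 KB pages (i386) */

    /* rough upper bound: MAX_READAHEAD pages per sequential stream */
    static unsigned long window_bytes(unsigned long max_readahead)
    {
            return max_readahead * PAGE_BYTES;
    }

    int main(void)
    {
            printf("MAX_READAHEAD  31: %4lu KB per stream\n",
                   window_bytes(31) / 1024);
            printf("MAX_READAHEAD 511: %4lu KB per stream\n",
                   window_bytes(511) / 1024);
            return 0;
    }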

    Note that this can have HORRIBLE effects if the total
    size of all the readahead windows combined doesn't fit
    in your memory.

    If you have 100 IO streams going on and you have space
    for 50 of them, you'll find yourself with 100 threads
    continuously pushing each other's read-ahead data out
    of RAM.

    100 threads may sound like a lot, but 100 clients really isn't
    that special for an ftp server...
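
    To make that concrete, a rough sketch assuming the ~2 MB windows from
    above, 100 streams and a 256 MB machine: the streams together want
    about 200 MB of read-ahead data resident at the same time.

    #include <stdio.h>

    #define PAGE_BYTES     4096UL    /* assumed 4 KB pages           */
    #define MAX_READAHEAD  511UL     /* as in the patch quoted above */
    #define NR_STREAMS     100UL     /* e.g. 100 ftp clients         */
    #define RAM_MB         256UL     /* assumed machine size         */

    int main(void)
    {
            unsigned long total_mb =
                    NR_STREAMS * MAX_READAHEAD * PAGE_BYTES / (1024 * 1024);

            printf("combined read-ahead windows: ~%lu MB\n", total_mb);
            if (total_mb > RAM_MB)
                    printf("doesn't fit in %lu MB of RAM, so the streams "
                           "keep evicting each other's read-ahead\n", RAM_MB);
            return 0;
    }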

    This effect is made a lot worse with the use-once
    strategy used in recent Linus kernels because:

    1) under memory pressure, the inactive_dirty list is
    only as large as one second of pageout IO, meaning
    the sum of the readahead windows that can stay
    resident is smaller than with a kernel which doesn't
    do the use-once thing (e.g. Alan's kernel); some
    rough numbers follow below

    2) on the other hand, the drop-behind strategy makes
    it much more likely that we'll replace the data we
    have already used instead of the read-ahead data we
    haven't used yet; this means the data we are about
    to use has a better chance of being in memory
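
    To put a rough number on point 1, a sketch assuming pageout runs at
    about 20 MB/s (an assumed figure, not a measurement): one second of
    pageout IO is then only ~20 MB of inactive_dirty pages, a tenth of
    the ~200 MB of combined windows from the sketch above.

    #include <stdio.h>

    #define PAGEOUT_MB_PER_SEC  20UL   /* assumed pageout bandwidth    */
    #define WINDOWS_MB         200UL   /* combined windows, from above */

    int main(void)
    {
            /* with use-once, roughly one second worth of pageout IO
               sits on the inactive_dirty list under memory pressure */
            unsigned long inactive_mb = PAGEOUT_MB_PER_SEC;

            printf("inactive_dirty: ~%lu MB,  read-ahead wanted: ~%lu MB "
                   "(only ~%lu%% fits)\n",
                   inactive_mb, WINDOWS_MB, 100 * inactive_mb / WINDOWS_MB);
            return 0;
    }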


    IA64: a worthy successor to the i860.

