Subject: [PATCH 22/33] readahead: initial method
Aggressive readahead policy for reads at start-of-file.

    Instead of selecting a conservative initial readahead size,
    this method attempts a large readahead from the very first read.

    However, we have to watch out for two cases (see the sketch after this list):
    - do not ruin the hit rate for file-head-checkers
    - do not lead to thrashing on memory-tight systems
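    The size computation at the heart of the patch is a simple clamp:
    start from the requested size, grow it toward the readahead size the
    system has recently tended to use (ra_expect_bytes), and cap it at
    the observed thrashing threshold (ra_thrash_bytes). Below is a
    minimal standalone userspace sketch of that logic; initial_ra_size
    and the sample numbers are illustrative only, and LOOKAHEAD_RATIO = 8
    is an assumed value chosen to match the ">= 32KB" comment at 4KB
    pages, not a constant quoted from the patch series.

    #include <stdio.h>

    #define LOOKAHEAD_RATIO 8   /* assumed: one look-ahead page per 8 window pages */

    /*
     * Clamp the initial readahead window: grow toward the size the
     * system tends to read (expect_pages), but never beyond the point
     * where readahead pages would be evicted before use (thrash_pages).
     */
    static unsigned long initial_ra_size(unsigned long req_pages,
                                         unsigned long expect_pages,
                                         unsigned long thrash_pages)
    {
            unsigned long ra_size = req_pages;

            if (ra_size < expect_pages)     /* be aggressive */
                    ra_size = expect_pages;
            if (ra_size > thrash_pages)     /* avoid thrashing */
                    ra_size = thrash_pages;

            return ra_size;
    }

    int main(void)
    {
            /* a 4-page request; the device tends to stream 64 pages but
             * thrashes beyond 32: the initial window comes out at 32 pages */
            unsigned long ra_size = initial_ra_size(4, 64, 32);
            unsigned long la_size = ra_size / LOOKAHEAD_RATIO;

            printf("ra_size=%lu pages, la_size=%lu pages\n", ra_size, la_size);
            return 0;
    }

    The look-ahead size (here 4 pages) marks how far before the end of
    the current window the next readahead may be started, so I/O for the
    following window can be pipelined with the remaining reads.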

    It benefits the many-small-files case:

    adaptive readahead: avg 10.3 seconds
    ====================================
    diff -r work/rxvt-unicode-7.7 /tmp/rxvt-unicode-7.7 0.10s user 0.32s system 4% cpu 10.211 total
    diff -r work/rxvt-unicode-7.7 /tmp/rxvt-unicode-7.7 0.09s user 0.31s system 3% cpu 10.389 total

    stock readahead: avg 12.3 seconds
    =================================
    diff -r work/rxvt-unicode-7.7 /tmp/rxvt-unicode-7.7 0.12s user 0.30s system 3% cpu 12.274 total
    diff -r work/rxvt-unicode-7.7 /tmp/rxvt-unicode-7.7 0.09s user 0.33s system 3% cpu 12.403 total

    The rxvt-unicode-7.7 tree being benchmarked is a directory containing many .C/.h/.o files.

    Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
    ---

    mm/readahead.c | 37 +++++++++++++++++++++++++++++++++++++
    1 files changed, 37 insertions(+)

    --- linux-2.6.17-rc4-mm3.orig/mm/readahead.c
    +++ linux-2.6.17-rc4-mm3/mm/readahead.c
    @@ -1537,6 +1537,43 @@ try_context_based_readahead(struct addre
     }

     /*
    + * Read-ahead on start of file.
    + *
    + * We want to be as aggressive as possible, _and_
    + * - do not ruin the hit rate for file-head-peekers
    + * - do not lead to thrashing for memory tight systems
    + */
    +static unsigned long
    +initial_readahead(struct address_space *mapping, struct file *filp,
    +                  struct file_ra_state *ra, unsigned long req_size)
    +{
    +        struct backing_dev_info *bdi = mapping->backing_dev_info;
    +        unsigned long thrash_pages = bdi->ra_thrash_bytes >> PAGE_CACHE_SHIFT;
    +        unsigned long expect_pages = bdi->ra_expect_bytes >> PAGE_CACHE_SHIFT;
    +        unsigned long ra_size;
    +        unsigned long la_size;
    +
    +        ra_size = req_size;
    +
    +        /* be aggressive if the system tends to read more */
    +        if (ra_size < expect_pages)
    +                ra_size = expect_pages;
    +
    +        /* no read-ahead thrashing */
    +        if (ra_size > thrash_pages)
    +                ra_size = thrash_pages;
    +
    +        /* do look-ahead on large(>= 32KB) read-ahead */
    +        la_size = ra_size / LOOKAHEAD_RATIO;
    +
    +        ra_set_class(ra, RA_CLASS_INITIAL);
    +        ra_set_index(ra, 0, 0);
    +        ra_set_size(ra, ra_size, la_size);
    +
    +        return ra_dispatch(ra, mapping, filp);
    +}
    +
    +/*
      * ra_min is mainly determined by the size of cache memory. Reasonable?
      *
      * Table of concrete numbers for 4KB page size:
    --