Subject: [PATCH 4.14 09/89] readahead: stricter check for bdi io_pages
    4.14-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Markus Stockhausen <stockhausen@collogia.de>

    commit dc30b96ab6d569060741572cf30517d3179429a8 upstream.

    ondemand_readahead() checks bdi->io_pages to cap the maximum number
    of pages to be processed. This works until the readit section: if we
    do an async-only readahead (async size == sync size) and the target
    is at the beginning of the window, we expand the window by another
    get_next_ra_size() pages. blktrace for large reads shows that the
    kernel then always issues a read of double the intended size at the
    beginning of processing. Add an additional check for io_pages in the
    lower part of the function. The fix helps devices that hard-limit
    bio pages and rely on proper handling of max_hw_read_sectors (e.g.
    older FusionIO cards). For that reason it could qualify for stable.

    Fixes: 9491ae4a ("mm: don't cap request size based on read-ahead setting")
    Cc: stable@vger.kernel.org
    Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    mm/readahead.c | 12 ++++++++++--
    1 file changed, 10 insertions(+), 2 deletions(-)

    --- a/mm/readahead.c
    +++ b/mm/readahead.c
    @@ -380,6 +380,7 @@ ondemand_readahead(struct address_space
     {
             struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
             unsigned long max_pages = ra->ra_pages;
    +        unsigned long add_pages;
             pgoff_t prev_offset;

             /*
    @@ -469,10 +470,17 @@ readit:
              * Will this read hit the readahead marker made by itself?
              * If so, trigger the readahead marker hit now, and merge
              * the resulted next readahead window into the current one.
    +         * Take care of maximum IO pages as above.
              */
             if (offset == ra->start && ra->size == ra->async_size) {
    -                ra->async_size = get_next_ra_size(ra, max_pages);
    -                ra->size += ra->async_size;
    +                add_pages = get_next_ra_size(ra, max_pages);
    +                if (ra->size + add_pages <= max_pages) {
    +                        ra->async_size = add_pages;
    +                        ra->size += add_pages;
    +                } else {
    +                        ra->size = max_pages;
    +                        ra->async_size = max_pages >> 1;
    +                }
             }

             return ra_submit(ra, mapping, filp);
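
    For illustration only (not part of the upstream patch): a minimal
    userspace sketch of the readit-path expansion the second hunk changes.
    struct ra_state, the next_ra_size() helper and the 256-page max_pages
    below are simplified stand-ins chosen for the example; the real
    get_next_ra_size() grows small windows more aggressively before
    clamping to max. With an async-only window already at max_pages and a
    read landing on ra->start, the unpatched branch submits twice
    max_pages, while the patched branch stays capped.

    /*
     * Illustration only -- not kernel code.  Models the readit-path window
     * expansion before and after the fix.  next_ra_size() is a simplified
     * stand-in for get_next_ra_size() (assumed: double, then clamp to max).
     */
    #include <stdio.h>

    struct ra_state {
        unsigned long start;        /* first page of the current window */
        unsigned long size;         /* window size in pages */
        unsigned long async_size;   /* async portion of the window */
    };

    static unsigned long next_ra_size(const struct ra_state *ra,
                                      unsigned long max)
    {
        unsigned long newsize = 2 * ra->size;

        return newsize < max ? newsize : max;
    }

    /* number of pages that would be handed to ra_submit() */
    static unsigned long readit_pages(struct ra_state *ra, unsigned long offset,
                                      unsigned long max_pages, int patched)
    {
        if (offset == ra->start && ra->size == ra->async_size) {
            unsigned long add_pages = next_ra_size(ra, max_pages);

            if (!patched || ra->size + add_pages <= max_pages) {
                /* old behaviour: always expand the window */
                ra->async_size = add_pages;
                ra->size += add_pages;
            } else {
                /* new behaviour: respect the max_pages (bdi->io_pages) cap */
                ra->size = max_pages;
                ra->async_size = max_pages >> 1;
            }
        }
        return ra->size;
    }

    int main(void)
    {
        unsigned long max_pages = 256;  /* e.g. a 1 MiB cap in 4 KiB pages */
        struct ra_state before = { .start = 0, .size = 256, .async_size = 256 };
        struct ra_state after = before;

        /* async-only window already at the cap, read lands on ra->start */
        printf("unpatched: %lu pages\n", readit_pages(&before, 0, max_pages, 0));
        printf("patched:   %lu pages\n", readit_pages(&after, 0, max_pages, 1));
        return 0;
    }

    Built with a plain C compiler, the sketch prints 512 pages for the
    unpatched path and 256 pages for the patched one, matching the doubled
    reads seen in the blktrace output mentioned above.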
