    Subject: [090/146] mm: vmscan: correctly check if reclaimer should schedule during shrink_slab

    2.6.38-stable review patch.  If anyone has any objections, please let us know.

    ------------------

    From: Minchan Kim <minchan.kim@gmail.com>

    commit f06590bd718ed950c98828e30ef93204028f3210 upstream.

    It has been reported on some laptops that kswapd is consuming large
    amounts of CPU and not being scheduled when SLUB is enabled during large
    amounts of file copying.  It is expected that this is due to kswapd
    missing every cond_resched() point because:

    shrink_page_list() calls cond_resched() if inactive pages were isolated,
    which in turn may not happen if all_unreclaimable is set in
    shrink_zones().  If, for whatever reason, all_unreclaimable is
    set on all zones, we can miss calling cond_resched().

    balance_pgdat() only calls cond_resched() if the zones are not
    balanced.  For a high-order allocation that is balanced, it
    checks order-0 again.  During that window, order-0 might have
    become unbalanced, so it loops again for order-0 and returns
    to kswapd() reporting that it was reclaiming for order-0.
    kswapd can then find that a caller has rewoken it for a
    high-order allocation and re-enter balance_pgdat() without ever
    calling cond_resched().

    shrink_slab() only calls cond_resched() if we are reclaiming slab
    pages.  If there are a large number of direct reclaimers, the
    shrinker_rwsem can be contended and prevent kswapd from calling
    cond_resched().
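
    For illustration, a simplified sketch of the pre-patch contended path
    (a sketch only, not a verbatim copy of 2.6.38 mm/vmscan.c; the shrinker
    loop body is elided): when the trylock on shrinker_rwsem failed,
    shrink_slab() bailed out without ever reaching a scheduling point.

	if (!down_read_trylock(&shrinker_rwsem))
		return 1;	/* bail out -- but no cond_resched() on this path */

	list_for_each_entry(shrinker, &shrinker_list, list) {
		/* ... call into the shrinker; cond_resched() happens only
		 * after a successful call into it ... */
	}

	up_read(&shrinker_rwsem);
	return ret;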

    This patch modifies the shrink_slab() case.  If the semaphore cannot be
    taken because it is contended, the caller still calls cond_resched()
    before returning.  The cond_resched() after each successful call into a
    shrinker remains in case one shrinker is particularly slow.

    [mgorman@suse.de: preserve call to cond_resched after each call into shrinker]
    Signed-off-by: Mel Gorman <mgorman@suse.de>
    Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Wu Fengguang <fengguang.wu@intel.com>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Tested-by: Colin King <colin.king@canonical.com>
    Cc: Raghavendra D Prabhu <raghu.prabhu13@gmail.com>
    Cc: Jan Kara <jack@suse.cz>
    Cc: Chris Mason <chris.mason@oracle.com>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Pekka Enberg <penberg@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

    ---
    mm/vmscan.c | 9 +++++++--
    1 file changed, 7 insertions(+), 2 deletions(-)

    --- a/mm/vmscan.c
    +++ b/mm/vmscan.c
    @@ -230,8 +230,11 @@ unsigned long shrink_slab(unsigned long
     	if (scanned == 0)
     		scanned = SWAP_CLUSTER_MAX;

    -	if (!down_read_trylock(&shrinker_rwsem))
    -		return 1;	/* Assume we'll be able to shrink next time */
    +	if (!down_read_trylock(&shrinker_rwsem)) {
    +		/* Assume we'll be able to shrink next time */
    +		ret = 1;
    +		goto out;
    +	}

     	list_for_each_entry(shrinker, &shrinker_list, list) {
     		unsigned long long delta;
    @@ -282,6 +285,8 @@ unsigned long shrink_slab(unsigned long
     		shrinker->nr += total_scan;
     	}
     	up_read(&shrinker_rwsem);
    +out:
    +	cond_resched();
     	return ret;
     }
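
    The fix follows the usual kernel pattern for long-running loops in
    kernel threads: on kernels without full preemption, such loops must
    yield voluntarily via cond_resched() or they can monopolize a CPU,
    which is what the missed scheduling points above allowed kswapd to do.
    A minimal, hypothetical illustration of that pattern (the work helpers
    below are made up for the example):

	while (more_work_to_do()) {		/* hypothetical condition */
		do_one_chunk_of_work();		/* hypothetical helper */
		cond_resched();			/* voluntary scheduling point */
	}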



