    Date: 30 Oct 2008
    Subject: [patch 5/7] mm: throttle writeout with cpuset awareness
    From: Christoph Lameter <cl@linux-foundation.org>

    This bases VM throttling in the reclaim path on the dirty ratio of
    the cpuset. Note that the cpuset's dirty limits are only effective
    when shrink_zone is called from direct reclaim.

    kswapd has a cpuset context that includes the whole machine, so VM
    throttling only takes effect during synchronous (direct) reclaim and
    not from kswapd.
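
    As a rough illustration of why that is, here is a standalone
    userspace sketch (not kernel code; the node[] table, DIRTY_RATIO and
    should_throttle() are invented names that only model the idea of
    computing the dirty limit over the caller's allowed nodes instead of
    over all of memory). A cpuset confined to one heavily dirtied node
    exceeds its limit and would be throttled, while the same pages
    measured against an all-nodes mask, as kswapd sees them, stay far
    below the machine-wide limit.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_NODES	4
    #define DIRTY_RATIO	20	/* percent, stand-in for vm_dirty_ratio */

    struct node_stats {
    	unsigned long pages;	/* pages managed on this node */
    	unsigned long dirty;	/* dirty + writeback pages on this node */
    };

    /* node 0 is heavily dirtied, the other nodes are nearly clean */
    static const struct node_stats node[MAX_NODES] = {
    	{ 1000000, 300000 },
    	{ 1000000,  10000 },
    	{ 1000000,  10000 },
    	{ 1000000,  10000 },
    };

    /* would a reclaimer restricted to 'nodemask' be throttled? */
    static bool should_throttle(unsigned long nodemask)
    {
    	unsigned long pages = 0, dirty = 0;
    	int n;

    	for (n = 0; n < MAX_NODES; n++) {
    		if (!(nodemask & (1UL << n)))
    			continue;
    		pages += node[n].pages;
    		dirty += node[n].dirty;
    	}
    	return dirty > pages * DIRTY_RATIO / 100;
    }

    int main(void)
    {
    	/* direct reclaim in a cpuset confined to node 0: over its limit */
    	printf("cpuset {0}: throttle=%d\n", should_throttle(0x1));
    	/* all-nodes context, as kswapd has: same pages, under the limit */
    	printf("all nodes:  throttle=%d\n", should_throttle(0xf));
    	return 0;
    }

    Built with any C compiler, the sketch prints throttle=1 for the
    confined cpuset and throttle=0 for the all-nodes case.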

    Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Nick Piggin <npiggin@suse.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Paul Menage <menage@google.com>
    Cc: Derek Fults <dfults@sgi.com>
    Signed-off-by: David Rientjes <rientjes@google.com>
    ---
    include/linux/writeback.h |    2 +-
    mm/page-writeback.c       |    4 ++--
    mm/vmscan.c               |    2 +-
    3 files changed, 4 insertions(+), 4 deletions(-)

    diff --git a/include/linux/writeback.h b/include/linux/writeback.h
    --- a/include/linux/writeback.h
    +++ b/include/linux/writeback.h
    @@ -114,7 +114,7 @@ static inline void inode_sync_wait(struct inode *inode)
    int wakeup_pdflush(long nr_pages, nodemask_t *nodes);
    void laptop_io_completion(void);
    void laptop_sync_completion(void);
    -void throttle_vm_writeout(gfp_t gfp_mask);
    +void throttle_vm_writeout(nodemask_t *nodes, gfp_t gfp_mask);

    /* These are exported to sysctl. */
    extern int dirty_background_ratio;
    diff --git a/mm/page-writeback.c b/mm/page-writeback.c
    --- a/mm/page-writeback.c
    +++ b/mm/page-writeback.c
    @@ -638,12 +638,12 @@ void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
    }
    EXPORT_SYMBOL(balance_dirty_pages_ratelimited_nr);

    -void throttle_vm_writeout(gfp_t gfp_mask)
    +void throttle_vm_writeout(nodemask_t *nodes, gfp_t gfp_mask)
    {
    	struct dirty_limits dl;

    	for ( ; ; ) {
    -		get_dirty_limits(&dl, NULL, &node_states[N_HIGH_MEMORY]);
    +		get_dirty_limits(&dl, NULL, nodes);

    		/*
    		 * Boost the allowable dirty threshold a bit for page
    diff --git a/mm/vmscan.c b/mm/vmscan.c
    --- a/mm/vmscan.c
    +++ b/mm/vmscan.c
    @@ -1466,7 +1466,7 @@ static unsigned long shrink_zone(int priority, struct zone *zone,
    	else if (!scan_global_lru(sc))
    		shrink_active_list(SWAP_CLUSTER_MAX, zone, sc, priority, 0);

    -	throttle_vm_writeout(sc->gfp_mask);
    +	throttle_vm_writeout(&cpuset_current_mems_allowed, sc->gfp_mask);
    	return nr_reclaimed;
    }

