Date: 2006-06-12
From: Christoph Lameter <clameter@sgi.com>
Subject: [PATCH 19/21] swap_prefetch: Conversion of nr_unstable to ZVC

swap_prefetch: conversion of nr_unstable to per zone counter

The determination of the VM state is no longer expensive now that the
use of page_state has been removed.  Remove the logic that avoided the
expensive checks.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
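
For readers following along outside the kernel tree, here is a minimal,
stand-alone sketch of the zoned VM counter (ZVC) idea this conversion relies
on: counters are kept per zone and updated where the events happen, so a
per-node value such as NR_UNSTABLE can be read by summing one node's zones
instead of aggregating a full page_state.  All names below (toy_zone,
toy_node_page_state, NR_TOY_ITEMS) are invented for illustration; this is
not the kernel implementation.

    /* Toy model of per-zone VM counters; NOT kernel code. */
    #include <stdio.h>

    enum toy_stat_item {
    	NR_TOY_WRITEBACK,
    	NR_TOY_DIRTY,
    	NR_TOY_UNSTABLE,
    	NR_TOY_ITEMS
    };

    #define NR_NODES	2
    #define ZONES_PER_NODE	2

    /* Each zone carries its own counters, updated at the point of change. */
    struct toy_zone {
    	long vm_stat[NR_TOY_ITEMS];
    };

    static struct toy_zone zones[NR_NODES][ZONES_PER_NODE];

    /* Reading one item for one node is a short sum over that node's zones. */
    static long toy_node_page_state(int node, enum toy_stat_item item)
    {
    	long sum = 0;
    	int z;

    	for (z = 0; z < ZONES_PER_NODE; z++)
    		sum += zones[node][z].vm_stat[item];
    	return sum;
    }

    int main(void)
    {
    	zones[0][0].vm_stat[NR_TOY_UNSTABLE] = 3;
    	zones[0][1].vm_stat[NR_TOY_UNSTABLE] = 5;

    	/* Mirrors the patch: ps.nr_unstable becomes a per-node counter read. */
    	printf("node 0 unstable pages: %ld\n",
    	       toy_node_page_state(0, NR_TOY_UNSTABLE));
    	return 0;
    }

Because such a read is cheap, the test_pagestate rate limiting removed in the
diff below is no longer needed.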

    Index: linux-2.6.17-rc6-cl/mm/swap_prefetch.c
    ===================================================================
    --- linux-2.6.17-rc6-cl.orig/mm/swap_prefetch.c 2006-06-12 12:55:39.368358636 -0700
    +++ linux-2.6.17-rc6-cl/mm/swap_prefetch.c 2006-06-12 12:55:41.321362692 -0700
@@ -298,7 +298,7 @@ static int prefetch_suitable(void)
 {
 	unsigned long limit;
 	struct zone *z;
-	int node, ret = 0, test_pagestate = 0;
+	int node, ret = 0;
 
 	/* Purposefully racy */
 	if (test_bit(0, &swapped.busy)) {
@@ -306,19 +306,6 @@ static int prefetch_suitable(void)
 		goto out;
 	}
 
-	/*
-	 * get_page_state and above_background_load are expensive so we only
-	 * perform them every SWAP_CLUSTER_MAX prefetched_pages.
-	 * We test to see if we're above_background_load as disk activity
-	 * even at low priority can cause interrupt induced scheduling
-	 * latencies.
-	 */
-	if (!(sp_stat.prefetched_pages % SWAP_CLUSTER_MAX)) {
-		if (above_background_load())
-			goto out;
-		test_pagestate = 1;
-	}
-
 	clear_current_prefetch_free();
 
 	/*
@@ -357,7 +344,6 @@ static int prefetch_suitable(void)
 	 */
 	for_each_node_mask(node, sp_stat.prefetch_nodes) {
 		struct node_stats *ns = &sp_stat.node[node];
-		struct page_state ps;
 
 		/*
 		 * We check to see that pages are not being allocated
@@ -375,11 +361,6 @@ static int prefetch_suitable(void)
 		} else
 			ns->last_free = ns->current_free;
 
-		if (!test_pagestate)
-			continue;
-
-		get_page_state_node(&ps, node);
-
 		/* We shouldn't prefetch when we are doing writeback */
 		if (node_page_state(node, NR_WRITEBACK)) {
 			node_clear(node, sp_stat.prefetch_nodes);
@@ -394,7 +375,8 @@ static int prefetch_suitable(void)
 			node_page_state(node, NR_ANON) +
 			node_page_state(node, NR_SLAB) +
 			node_page_state(node, NR_DIRTY) +
-			ps.nr_unstable + total_swapcache_pages;
+			node_page_state(node, NR_UNSTABLE) +
+			total_swapcache_pages;
 		if (limit > ns->prefetch_watermark) {
 			node_clear(node, sp_stat.prefetch_nodes);
 			continue;