Date: 7 Dec 2005
From: Christoph Lameter <clameter@sgi.com>
Subject: [PATCH] swap migration: Fix lru drain
isolate_page() currently uses an IPI to notify the other processors that
their lru caches need to be drained if the page cannot be found on the LRU.
However, the IPI may interrupt a processor that is in the middle of
processing its own lru caches and cause a race condition.

This patch introduces a new function schedule_on_each_cpu() that uses keventd
to run the LRU draining on each processor. Processors disable preemption when
dealing with their LRU caches (these are per-processor), and thus executing
the LRU drain from a keventd process on each processor is safe.

Thanks to Lee Schermerhorn <lee.schermerhorn@hp.com> for finding this race
condition.

This makes the

preserve-irq-status-in-release_pages-__pagevec_lru_add.patch

in Andrew's tree no longer necessary.

Signed-off-by: Christoph Lameter <clameter@sgi.com>

Index: linux-2.6.15-rc5-mm1/mm/vmscan.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c 2005-12-05 11:32:21.000000000 -0800
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c 2005-12-05 13:02:27.000000000 -0800
@@ -1038,7 +1038,7 @@ redo:
                  * Maybe this page is still waiting for a cpu to drain it
                  * from one of the lru lists?
                  */
-                on_each_cpu(lru_add_drain_per_cpu, NULL, 0, 1);
+                schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);
                 if (PageLRU(page))
                         goto redo;
         }
Index: linux-2.6.15-rc5-mm1/include/linux/workqueue.h
===================================================================
--- linux-2.6.15-rc5-mm1.orig/include/linux/workqueue.h 2005-12-03 21:10:42.000000000 -0800
+++ linux-2.6.15-rc5-mm1/include/linux/workqueue.h 2005-12-05 13:02:07.000000000 -0800
@@ -65,6 +65,7 @@ extern int FASTCALL(schedule_work(struct
 extern int FASTCALL(schedule_delayed_work(struct work_struct *work, unsigned long delay));

 extern int schedule_delayed_work_on(int cpu, struct work_struct *work, unsigned long delay);
+extern void schedule_on_each_cpu(void (*func)(void *info), void *info);
 extern void flush_scheduled_work(void);
 extern int current_is_keventd(void);
 extern int keventd_up(void);
Index: linux-2.6.15-rc5-mm1/kernel/workqueue.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/kernel/workqueue.c 2005-12-05 11:15:24.000000000 -0800
+++ linux-2.6.15-rc5-mm1/kernel/workqueue.c 2005-12-06 17:50:44.000000000 -0800
@@ -424,6 +424,19 @@ int schedule_delayed_work_on(int cpu,
         return ret;
 }

+void schedule_on_each_cpu(void (*func) (void *info), void *info)
+{
+        int cpu;
+        struct work_struct *work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);
+
+        for_each_online_cpu(cpu) {
+                INIT_WORK(work + cpu, func, info);
+                __queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work + cpu);
+        }
+        flush_workqueue(keventd_wq);
+        kfree(work);
+}
+
 void flush_scheduled_work(void)
 {
         flush_workqueue(keventd_wq);