    Subject: Re: [patch] voluntary-preempt-2.6.8-rc2-J3
    Ingo Molnar <> wrote:

    The bigger this thing gets, the more worried I get. At some point this is
    going to need to be split up into individual fixes, and those fixes need
    to be based on an overall approach which we haven't yet settled on.

    In particular your whole approach (with voluntary_need_resched()) doesn't
    work on SMP.
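
    To make the SMP problem concrete (rough illustration only, not code from
    your patch): a need_resched()-driven break-out inside a lock-holding loop
    only fires if something sets the holder's TIF_NEED_RESCHED, and a CPU
    spinning on the spinlock never does that:

    	spin_lock(&mm->page_table_lock);
    	while (nr--) {
    		/*
    		 * UP: a newly woken task sets our TIF_NEED_RESCHED, so this
    		 * check bounds the latency.  SMP: a task spinning on
    		 * page_table_lock on another CPU never sets our flag, so
    		 * this never triggers and the lock is held for the whole
    		 * loop.
    		 */
    		if (need_resched()) {
    			spin_unlock(&mm->page_table_lock);
    			cond_resched();
    			spin_lock(&mm->page_table_lock);
    		}
    		/* ... per-page work under the lock ... */
    	}
    	spin_unlock(&mm->page_table_lock);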

    The approach I'm using is to unconditionally drop locks on every Nth pass
    around the loop to allow another CPU to grab the lock, do some work, drop
    the lock, then be preempted, e.g.:

    @@ -773,6 +774,12 @@ int get_user_pages(struct task_struct *t
     			struct page *map = NULL;
     			int lookup_write = write;
     
    +			if ((++nr_pages & 63) == 0) {
    +				spin_unlock(&mm->page_table_lock);
    +				cpu_relax();
    +				spin_lock(&mm->page_table_lock);
    +			}
    +
     			/*
     			 * We don't follow pagetables for VM_IO regions - they

    This is a bit nasty, because it assumes that the other CPU will be able to
    get in there and acquire the lock, which lives in a different CPU's cache.
    Long cacheline transfer latencies may defeat this.

    For voluntary preempt the above will need a cond_resched() added to it.
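
    i.e. something like this on top of the above (untested sketch):

    	if ((++nr_pages & 63) == 0) {
    		spin_unlock(&mm->page_table_lock);
    		cond_resched();		/* voluntary preemption point, lock dropped */
    		cpu_relax();
    		spin_lock(&mm->page_table_lock);
    	}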

    So we need to come up with a suitable approach to covering voluntary and
    involuntary preemption on both UP and SMP, and set up the header file
    infrastructure to support it all. After that we need to redo all the
    points-of-high-latency which have thus far been discovered to use that
    infrastructure.
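
    One possible shape for that infrastructure (name and details made up,
    just to sketch what I mean): a single lock-break helper which drops the
    lock either when the holder needs to reschedule (the UP/voluntary case)
    or unconditionally every Nth call (the SMP case), so callers like
    get_user_pages() don't have to open-code it:

    	/*
    	 * Sketch only: break up a long lock-hold section.  Drops and
    	 * retakes @lock when this task needs to reschedule, or every 64th
    	 * call so a CPU spinning on @lock can get in.  Returns 1 if the
    	 * lock was dropped.
    	 */
    	static inline int break_spinlock(spinlock_t *lock, unsigned int *count)
    	{
    		if (need_resched() || (++*count & 63) == 0) {
    			spin_unlock(lock);
    			cond_resched();
    			cpu_relax();
    			spin_lock(lock);
    			return 1;
    		}
    		return 0;
    	}

    get_user_pages() would then just call
    break_spinlock(&mm->page_table_lock, &nr_pages) once per page.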

