    Subject: Re: [patch] voluntary-preempt-2.6.8-rc2-J3
    From: Eric St-Laurent
    Date: 29 Jul 2004

    On Mon, 2004-07-26 at 16:36, Ingo Molnar wrote:
    > one alternative technique to yours would be to notify _all_ CPUs that a
    > high-prio RT task is ready to run (via a broadcast need-resched). That
    > way the UP latency-break techniques map to SMP in a 1:1 way.
    >
    > non-RT tasks don't get this benefit, which is a difference from the UP
    > situation, but i don't think it would be appropriate to use the UP
    > behavior, due to the overhead of broadcasting.
    >
    > a combination of the two techniques could be used too: a global 'break
    > locks from now on' flag which gets set if a (RT?) task wants to
    > reschedule. Normally this flag would be zero and the cacheline would be
    > clean and shared between all CPUs, causing no overhead. Once a task
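
    (As a rough illustration of that "break locks from now on" flag -- the
    names below are invented, this is not actual kernel code: the flag is
    normally zero so its cacheline stays clean and shared, it is written
    only from the RT wakeup path, and it is polled at the latency-break
    points already used inside long-held locks.)

    #include <linux/sched.h>        /* need_resched() */

    static int rt_break_locks;      /* normally 0, cacheline stays clean/shared */

    /* called from the wakeup path when a high-prio RT task becomes runnable */
    static inline void request_lock_break(void)
    {
            rt_break_locks = 1;
    }

    /* polled at the existing latency-break points inside long-held locks */
    static inline int lock_break_requested(void)
    {
            return rt_break_locks || need_resched();
    }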

    It might be possible to eliminate the need_resched flag. Here is an
    article that talks about an event-driven preemption model.

    Quoting:

    Over the long term, MontaVista is investigating whether preemption locks
    can be eliminated (or at least greatly reduced in number) by protecting
    all the short-duration critical regions with spinlocks that also disable
    interrupts on the local CPU, and the long-duration critical regions with
    mutex locks. ("Long duration" means much greater than the time taken by
    two context switches.) This will reduce the overhead of the preemptable
    kernel, since there will no longer be any need to test for preemption
    ("polling for preemption") at the end of a preemption-locked region
    (which could happen tens of thousands of times per second on an average
    system). Instead, preemption would happen automatically as part of the
    interrupt servicing that causes a higher-priority process to become
    runnable ("event-driven preemption"). Typically, this only happens a few
    times to a few tens of times per second with an average system workload,
    making the "event-driven preemption" model much more efficient than the
    "polling for preemption" model. This method also has an added efficiency
    in that the system will take advantage of the cache disruption caused by
    the interrupt (which is unavoidable) to continue with the preemption.

    Reference:

    http://www.linuxdevices.com/cgi-bin/printerfriendly.cgi?id=AT4185744181
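
    (For clarity, here is a rough sketch of the two models described in the
    quote above -- the names are illustrative, not the real kernel
    implementation.  In the "polling" model every exit from a
    preempt-locked region must test need_resched(); in the "event-driven"
    model the only remaining test is on the interrupt-return path, which
    is exactly where the higher-priority task was made runnable.)

    #include <linux/sched.h>        /* need_resched(), schedule() */

    static int my_preempt_count;    /* per-CPU/per-task in the real kernel */

    /* "Polling for preemption": tested at the end of every preempt-locked
     * region, potentially tens of thousands of times per second. */
    #define my_preempt_enable()                                     \
    do {                                                            \
            my_preempt_count--;                                     \
            if (my_preempt_count == 0 && need_resched())            \
                    schedule();                                     \
    } while (0)

    /* "Event-driven preemption": short critical sections disable local
     * interrupts instead of preemption, so the preemption decision is
     * made only on the interrupt-return path -- right after the interrupt
     * whose handler woke the higher-priority task. */
    static void my_irq_exit_to_kernel(void)
    {
            if (need_resched())
                    schedule();     /* preempt the interrupted kernel code */
    }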


    Best regards,

    Eric St-Laurent


