 
    Date: Wed, 5 Dec 2001
    From: Davide Libenzi
    Subject: Re: Scheduler Cleanup
    On Wed, 5 Dec 2001, Mike Kravetz wrote:

    > On Wed, Dec 05, 2001 at 03:44:42PM -0800, Davide Libenzi wrote:
    > > Anyway, I too have verified an increased latency with the sched_yield()
    > > test, and in the next few days I'm going to try the real one with the
    > > cycle counter sampler.
    > > I have a suspicion, but I have to see the disassembly of schedule()
    > > before talking :)
    >
    > One thing to note is that a possible acquisition of the runqueue
    > lock was reintroduced in sys_sched_yield(). From looking at
    > the code, it seems the purpose was to "add fairness" in the case
    > of multiple yielders. Is that correct, Ingo?

    Yep. Suppose you have three running tasks A, B and C (in that order), and
    suppose A and B are yield()ing.
    You get:

    A B A B A B A B A B A ...

    until the priority of A or B drops; only then does C get a chance to
    execute. Since with the new counter decay code the priority stays up
    until a sudden drop, why don't we decrease the counter by K (1?) on
    each sched_yield() call? That way we can easily avoid the above
    pattern in the case of multiple yield()ers.
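
    A minimal sketch of the idea, assuming 2.4-style task_struct fields
    (counter, need_resched); hypothetical code for illustration, not the
    actual sys_sched_yield():

        #define YIELD_COST 1    /* the K above, assumed to be 1 */

        asmlinkage long sys_sched_yield(void)
        {
                /* Charge each yield K ticks, so repeated yielders decay
                 * faster and a non-yielding task like C gets to run. */
                if (current->counter > YIELD_COST)
                        current->counter -= YIELD_COST;
                else
                        current->counter = 0;   /* slice gone, priority drops */

                current->need_resched = 1;      /* request a reschedule */
                return 0;
        }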



    PS: the sched_yield() code path is very important because the pthread
    spin lock wait path calls it a number of times before falling back
    into usleep().
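
    For illustration, here is an assumed shape of that wait path (not the
    actual linuxthreads source; spin_wait(), SPIN_YIELDS and the GCC
    __sync_lock_test_and_set() builtin are my own choices):

        #include <sched.h>
        #include <unistd.h>

        #define SPIN_YIELDS 50  /* yields before sleeping, assumed */

        static void spin_wait(volatile int *lock)
        {
                int tries = 0;

                /* Atomically set *lock to 1; keep looping while the old
                 * value was 1, i.e. someone else still holds the lock. */
                while (__sync_lock_test_and_set(lock, 1)) {
                        if (++tries < SPIN_YIELDS)
                                sched_yield();  /* the hot path above */
                        else
                                usleep(1000);   /* back off: sleep 1 ms */
                }
        }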



    - Davide



