 
    Date: 19 Jun 2002
    From: bill davidsen <davidsen@tmr.com>
    Subject: RE: Question about sched_yield()
    On 18 Jun 2002, Robert Love wrote:

    > the second bunch should _not_ receive as much CPU time as the first.
    > This has nothing to do with intelligent work or blocking or picking your
    > nose. It has everything to do with the fact that yielding means
    > "relinquish my timeslice" and "put me at the end of the runqueue".

    I have put some ideas below; I agree with the "relinquish" part, but not
    with going to the end of the queue.

    > If we are doing this, then why does the sched_yield'ing task monopolize
    > the CPU? BECAUSE IT IS BROKEN.

    Consider the case where a threaded application and a CPU hog are running.
    Taken as a whole the threaded process is also a hog, although no single
    thread is.
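
    For concreteness, the kind of test I have in mind could be thrown together
    in userspace roughly like the sketch below (the thread count and structure
    are my own illustration, not the demo program already posted in this
    thread): run one copy as a pure hog and another as a pack of yielding
    threads, then compare their CPU shares with top(1).

    /* yield-vs-hog demo sketch: run "./a.out hog" in one shell and
     * "./a.out" in another, then compare CPU shares with top(1).
     * Build with: gcc -O2 -pthread demo.c */
    #include <pthread.h>
    #include <sched.h>
    #include <string.h>

    #define NTHREADS 4    /* arbitrary thread count for the "polite" process */

    static void *yielder(void *arg)
    {
        (void)arg;
        for (;;)
            sched_yield();  /* each thread constantly gives up the CPU */
        return NULL;
    }

    int main(int argc, char **argv)
    {
        pthread_t tid[NTHREADS];
        int i;

        if (argc > 1 && strcmp(argv[1], "hog") == 0) {
            for (;;)        /* pure CPU hog: never yields, never blocks */
                ;
        }

        /* threaded process: in aggregate a hog, but every thread yields */
        for (i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, yielder, NULL);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }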

    I find I can figure out problems like this if I start from the desired
    result and work back. Here, I believe the desired result is that the pure
    hog gets half the CPU and the threaded application, in aggregate, gets the
    other half. To provide that, I would expect the action at sched_yield() to
    look somewhat like this:

    if there is less than {sched overhead} left on the timeslice
        add to end of queue
        do accounting to reflect the time used
        run the next thread
    else
        if there is another thread of this process
            queue this thread following the last thread of this process
            give the remaining timeslice to the first queued thread of this process
        else
            put the yielding thread at the end of the queue
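
    To make that concrete, here is a rough C sketch of the same decision
    against a toy runqueue. Everything in it (struct task, struct runqueue,
    the tgid field, SCHED_OVERHEAD, the helpers) is hypothetical and
    simplified for illustration; it is not the real scheduler code or its
    data structures.

    /* Toy model of the yield policy above; not real kernel code. */
    #include <stddef.h>

    struct task {
        struct task *next;       /* singly linked runqueue */
        int          tgid;       /* which process this thread belongs to */
        int          time_left;  /* remaining timeslice, in ticks */
    };

    struct runqueue {
        struct task *head;
        struct task *tail;
    };

    #define SCHED_OVERHEAD 1     /* ticks; below this a slice is not worth keeping */

    /* Append a task at the tail of the runqueue. */
    static void enqueue_tail(struct runqueue *rq, struct task *t)
    {
        t->next = NULL;
        if (rq->tail)
            rq->tail->next = t;
        else
            rq->head = t;
        rq->tail = t;
    }

    /* Insert 't' immediately after 'pos' in the runqueue. */
    static void enqueue_after(struct runqueue *rq, struct task *pos, struct task *t)
    {
        t->next = pos->next;
        pos->next = t;
        if (rq->tail == pos)
            rq->tail = t;
    }

    /* 'cur' has just called sched_yield() and is off the runqueue. */
    static void yield_task(struct runqueue *rq, struct task *cur)
    {
        struct task *t, *first = NULL, *last = NULL;

        /* find the first and last queued threads of the same process */
        for (t = rq->head; t; t = t->next) {
            if (t->tgid == cur->tgid) {
                if (!first)
                    first = t;
                last = t;
            }
        }

        if (cur->time_left < SCHED_OVERHEAD) {
            /* slice effectively used up: charge it and go to the back */
            cur->time_left = 0;
            enqueue_tail(rq, cur);
        } else if (last) {
            /* donate the leftover slice to the first sibling and queue
             * the yielder behind the last one, so the process as a whole
             * keeps its share of the CPU against a pure hog */
            first->time_left += cur->time_left;
            cur->time_left = 0;
            enqueue_after(rq, last, cur);
        } else {
            /* no runnable sibling: ordinary yield, back of the queue */
            enqueue_tail(rq, cur);
        }
    }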

    One question which arises is CPU affinity, and I confess I can't predict
    which is better: process affinity or contested-resource affinity (running
    the next thread now). But for uniprocessor machines, I think what I
    outline above results in the behaviour I find most intuitive and
    desirable.

    On your comment about leaving the yielded thread at the top of the queue:
    putting the yielded thread at the end of the queue effectively makes the
    delay before it runs again the sum of all the timeslices given out in one
    pass through the queue. In a system with a contested resource and at
    least one hog, I think that would starve the threaded task of CPU. As
    several people have noted, queueing back at the head can also behave
    badly in some cases, although the demo program provided is probably not
    typical.

    I think what I suggest avoids both problems, although it might have other
    drawbacks, like scheduling overhead if the queue is very long and has no
    other threads of the yielding process. That is atypical, but a maximum
    "scan for another thread" count could be provided, or even a per-process
    count of runnable threads in the queue, which is probably more overhead
    than just doing the scan; at least under light load you can afford the
    scan.
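
    As a sketch of that bounded-scan idea, reusing the hypothetical
    structures from the fragment above (MAX_YIELD_SCAN is an arbitrary number
    I made up):

    /* Bounded sibling scan: give up after MAX_YIELD_SCAN queued tasks so a
     * long runqueue with no siblings does not turn every yield into a full
     * list walk. NULL means "treat this as an ordinary yield". */
    #define MAX_YIELD_SCAN 32

    static struct task *last_sibling_bounded(struct runqueue *rq, struct task *cur)
    {
        struct task *t, *last = NULL;
        int scanned = 0;

        for (t = rq->head; t && scanned < MAX_YIELD_SCAN; t = t->next, scanned++)
            if (t->tgid == cur->tgid)
                last = t;

        return last;
    }

    The per-process count of runnable threads would replace the loop
    entirely, at the cost of maintaining the counter on every enqueue and
    dequeue.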

    Your comments?

    --
    bill davidsen <davidsen@tmr.com>
    CTO, TMR Associates, Inc
    Doing interesting things with little computers since 1979.

