    Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler
    On 09/24/2012 03:54 PM, Peter Zijlstra wrote:
    > On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
    >> However, Rik had a genuine concern about the cases where load is not
    >> equally distributed across runqueues and the lock holder might actually
    >> be on a different runqueue but not running.
    >
    > Load should eventually get distributed equally -- that's what the
    > load-balancer is for -- so this is a temporary situation.

    What's the expected latency? This is the whole problem. Eventually the
    scheduler would pick the lock holder as well, but the problem is that this
    happens on a millisecond scale while lock hold times are on a microsecond
    scale, leading to a 1000x slowdown.

    If we want to yield, we really want to boost someone.
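
    For illustration, here is a rough sketch of what "boosting someone" could
    look like in the PLE exit path, loosely modeled on what kvm_vcpu_on_spin()
    already does today; the candidate filter and helper usage below are a
    simplification for this discussion, not the actual implementation:

        /*
         * Sketch only: donate the spinning vcpu's timeslice to a vcpu that is
         * runnable but not currently running, so the likely lock holder runs
         * now instead of whenever the load balancer gets around to it.
         */
        static void ple_boost_candidate(struct kvm_vcpu *me)
        {
                struct kvm *kvm = me->kvm;
                struct kvm_vcpu *vcpu;
                int i;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        if (vcpu == me)
                                continue;
                        /* skip vcpus that are halted rather than spinning */
                        if (waitqueue_active(&vcpu->wq))
                                continue;
                        /* directed yield_to() underneath; stop after one success */
                        if (kvm_vcpu_yield_to(vcpu) > 0)
                                break;
                }
        }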

    > We already try to favour the non-running vcpu in this case; that's what
    > yield_to_task_fair() is about. If it's still not eligible to run, tough
    > luck.

    Crazy idea: instead of yielding, just run that other vcpu in the thread
    that would otherwise spin. I can see about a million objections to this
    already though.

    >> Do you think that, instead of using rq->nr_running, we could get a global
    >> sense of load using avenrun (something like avenrun/num_onlinecpus)?
    >
    > To what purpose? Also, global stuff is expensive, so you should try and
    > stay away from it as hard as you possibly can.

    Spinning is also expensive. How about we do the global stuff every N
    times, to amortize the cost (and reduce contention)?
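
    To make the amortization concrete, here is a minimal sketch under the
    assumption that the 1-minute load average via get_avenrun() is a good
    enough proxy for global load; the interval, threshold, and function name
    are made up for illustration:

        /*
         * Sketch only: consult the global load average once every
         * PLE_GLOBAL_CHECK_INTERVAL invocations and reuse the cached answer
         * in between.  Races on the plain counter/cache are harmless for a
         * sampling heuristic like this.
         */
        #define PLE_GLOBAL_CHECK_INTERVAL   128

        static bool ple_system_overcommitted(void)
        {
                static unsigned int count;
                static bool overcommitted;
                unsigned long load[3];

                if (count++ % PLE_GLOBAL_CHECK_INTERVAL)
                        return overcommitted;

                /* fixed-point load averages, FIXED_1 == 1.0 */
                get_avenrun(load, FIXED_1/200, 0);
                overcommitted = load[0] / num_online_cpus() > FIXED_1;
                return overcommitted;
        }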

    --
    error compiling committee.c: too many arguments to function

