    Date:    2012-10-04
    From:    Avi Kivity
    Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler
    On 10/04/2012 12:56 PM, Raghavendra K T wrote:
    > On 10/03/2012 10:55 PM, Avi Kivity wrote:
    >> On 10/03/2012 04:29 PM, Raghavendra K T wrote:
    >>> * Avi Kivity <avi@redhat.com> [2012-09-27 14:03:59]:
    >>>
    >>>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
    >>>>>>
    >>> [...]
    >>>>> 2) Looking at the results (comparing A & C), I do feel we have
    >>>>> significant overhead in iterating over vcpus (when compared to
    >>>>> even vmexit), so we would still need the undercommit fix suggested
    >>>>> by PeterZ (improving by 140%)?
    >>>>
    >>>> Looking only at the current runqueue? My worry is that it misses a lot
    >>>> of cases. Maybe try the current runqueue first and then others.
    >>>>
    >>>
    >>> Okay. Do you mean we can have something like
    >>>
    >>> +        if (rq->nr_running == 1 && p_rq->nr_running == 1) {
    >>> +                yielded = -ESRCH;
    >>> +                goto out_irq;
    >>> +        }
    >>>
    >>> in Peter's patch?
    >>>
    >>> (I thought a lot about && vs. ||; both seem to have their own cons.)
    >>> But that should apply only when we have a short-term imbalance, as
    >>> PeterZ said.
    >>
    >> I'm missing the context. What is p_rq?
    >
    > p_rq is the run queue of the target vcpu.
    > What I was trying to do below was to address Rik's concern: suppose
    > the rq of the source vcpu has one task, but the target rq has two
    > tasks, with an eligible vcpu waiting to be scheduled.
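
    (For reference, a rough and untested sketch of where the check quoted
    above would sit in kernel/sched/core.c:yield_to(); the body is
    simplified and several of the existing sanity checks are left out:)

    int __sched yield_to(struct task_struct *p, bool preempt)
    {
            struct task_struct *curr = current;
            struct rq *rq, *p_rq;
            unsigned long flags;
            int yielded = 0;

            local_irq_save(flags);
            rq = this_rq();

    again:
            p_rq = task_rq(p);

            /*
             * If both the source rq and the target vcpu's rq have a single
             * runnable task, there is nobody else to yield to; bail out
             * before taking the runqueue locks.
             */
            if (rq->nr_running == 1 && p_rq->nr_running == 1) {
                    yielded = -ESRCH;
                    goto out_irq;
            }

            double_rq_lock(rq, p_rq);
            if (task_rq(p) != p_rq) {
                    /* Target migrated while we were taking the locks. */
                    double_rq_unlock(rq, p_rq);
                    goto again;
            }

            if (curr->sched_class == p->sched_class &&
                curr->sched_class->yield_to_task)
                    yielded = curr->sched_class->yield_to_task(rq, p, preempt);

            double_rq_unlock(rq, p_rq);
    out_irq:
            local_irq_restore(flags);

            if (yielded > 0)
                    schedule();

            return yielded;
    }

    Placing the bail-out before double_rq_lock() means the undercommit case
    (a single runnable task on each runqueue) never takes either runqueue
    lock.
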
    >
    >>
    >> What I mean was:
    >>
    >> if can_yield_to_process_in_current_rq
    >>         do that
    >> else if can_yield_to_process_in_other_rq
    >>         do that
    >> else
    >>         return -ESRCH
    >
    > I think you are saying we have to check the run queue of the
    > source vcpu: if we have a vcpu belonging to the same VM there, try to
    > yield to that, ignoring whatever target vcpu we received for yield_to.
    >
    > Or is it that kvm_vcpu_yield_to should now first check the vcpus of
    > the same VM that belong to the same run queue, and only if that does
    > not succeed, go for a vcpu on a different runqueue?

    Right. Prioritize vcpus that are cheap to yield to. But it may return
    bad results if all vcpus on the current runqueue are spinners, so it is
    probably not a good idea.
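
    (For concreteness, the two-pass idea would look roughly like the sketch
    below, as a hypothetical helper inside virt/kvm/kvm_main.c.
    vcpu_task_on_this_cpu() is a made-up name, and the loop ignores the
    eligibility and last_boosted_vcpu logic of the real kvm_vcpu_on_spin();
    it only illustrates "cheap targets first, everyone else second".)

    /* Heuristic: is this vcpu's task currently placed on our CPU? */
    static bool vcpu_task_on_this_cpu(struct kvm_vcpu *vcpu)
    {
            struct task_struct *task;
            bool local = false;

            rcu_read_lock();
            task = get_pid_task(rcu_dereference(vcpu->pid), PIDTYPE_PID);
            rcu_read_unlock();
            if (task) {
                    /* Only a hint: the task can migrate right after this. */
                    local = (task_cpu(task) == raw_smp_processor_id());
                    put_task_struct(task);
            }
            return local;
    }

    static int try_directed_yield(struct kvm_vcpu *me)
    {
            struct kvm_vcpu *vcpu;
            int i, pass;

            /* Pass 0: vcpus whose task sits on our CPU; pass 1: the rest. */
            for (pass = 0; pass < 2; pass++) {
                    kvm_for_each_vcpu(i, vcpu, me->kvm) {
                            if (vcpu == me)
                                    continue;
                            if (vcpu_task_on_this_cpu(vcpu) != (pass == 0))
                                    continue;
                            if (kvm_vcpu_yield_to(vcpu) > 0)
                                    return 1;
                    }
            }
            return -ESRCH; /* nobody worth yielding to */
    }

    As said above, though, if every vcpu whose task is on the local runqueue
    is itself spinning, pass 0 only boosts spinners, which is why this is
    probably not worth doing.
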

    > Does it add more overhead, especially in the <= 1x scenario?

    The current runqueue should have just our vcpu in that case, so low
    overhead. But it's a bad idea due to the above scenario.

    --
    error compiling committee.c: too many arguments to function

