Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler
On 09/24/2012 06:06 PM, Peter Zijlstra wrote:
> On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
>> On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
>>> On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
>>>> In some special scenarios, like #vcpu <= #pcpu, the PLE handler may
>>>> prove very costly, because there is no need to iterate over vcpus
>>>> and do unsuccessful yield_to() calls, burning CPU.
>>>
>>> What's the costly thing? The vm-exit, the yield (which should be a nop
>>> if it's the only task there) or something else entirely?
>>>
>> Both the vmexit and yield_to(), actually, because an unsuccessful
>> yield_to() is costly overall in the PLE handler.
>>
>> This is because when we have large guests, say 32/16 vcpus, and one
>> vcpu is holding a lock while the rest of the vcpus are waiting for it,
>> then when they PLE-exit, each vcpu tries to iterate over the rest of
>> the vcpu list in the VM and do a directed yield (unsuccessfully), so
>> we get O(n^2) tries.
>>
>> This results in a fairly high amount of CPU burning and double-runqueue
>> lock contention.
>>
>> (If they had kept spinning, the lock would probably have made faster
>> progress.) As Avi and Chegu Vinod felt, it would be better to avoid the
>> vmexit itself, but that seems a little complex to achieve currently.
>
> OK, so the vmexit stays and we need to improve yield_to.
>
> How about something like the below? That would allow breaking out of
> the for-each-vcpu loop and simply going back into the VM, right?
>
> ---
>  kernel/sched/core.c | 29 +++++++++++++++++++++--------
>  1 file changed, 21 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b38f00e..5d5b355 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4272,18 +4272,30 @@ EXPORT_SYMBOL(yield);
>   * It's the caller's job to ensure that the target task struct
>   * can't go away on us before we can do any checks.
>   *
> - * Returns true if we indeed boosted the target task.
> + * Returns:
> + *      true (>0) if we indeed boosted the target task.
> + *      false (0) if we failed to boost the target.
> + *      -ESRCH if there's no task to yield to.
>   */
> -bool __sched yield_to(struct task_struct *p, bool preempt)
> +int __sched yield_to(struct task_struct *p, bool preempt)
>  {
>          struct task_struct *curr = current;
>          struct rq *rq, *p_rq;
>          unsigned long flags;
> -        bool yielded = 0;
> +        int yielded = 0;
>  
>          local_irq_save(flags);
>          rq = this_rq();
>  
> +        /*
> +         * If we're the only runnable task on the rq, there's absolutely no
> +         * point in yielding.
> +         */
> +        if (rq->nr_running == 1) {
> +                yielded = -ESRCH;
> +                goto out_irq;
> +        }
> +
>  again:
>          p_rq = task_rq(p);
>          double_rq_lock(rq, p_rq);
> @@ -4293,13 +4305,13 @@ int __sched yield_to(struct task_struct *p, bool preempt)
>          }
>  
>          if (!curr->sched_class->yield_to_task)
> -                goto out;
> +                goto out_unlock;
>  
>          if (curr->sched_class != p->sched_class)
> -                goto out;
> +                goto out_unlock;
>  
>          if (task_running(p_rq, p) || p->state)
> -                goto out;
> +                goto out_unlock;
>  
>          yielded = curr->sched_class->yield_to_task(rq, p, preempt);
>          if (yielded) {
> @@ -4312,11 +4324,12 @@ int __sched yield_to(struct task_struct *p, bool preempt)
>                  resched_task(p_rq->curr);
>          }
>  
> -out:
> +out_unlock:
>          double_rq_unlock(rq, p_rq);
> +out_irq:
>          local_irq_restore(flags);
>  
> -        if (yielded)
> +        if (yielded > 0)
>                  schedule();
>  
>          return yielded;
>
>

Yes, I think this is a nice idea. Any future users of yield_to() would
also benefit from this. We will have to iterate only until the first
attempt to yield_to().
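
Something like this in the PLE handler then, I guess (rough sketch
only, not the actual kvm code; I am assuming here that
kvm_vcpu_yield_to() propagates yield_to()'s new return value
unchanged):

        kvm_for_each_vcpu(i, vcpu, kvm) {
                int ret;

                if (vcpu == me)
                        continue;

                /* assumed to return yield_to()'s result as-is */
                ret = kvm_vcpu_yield_to(vcpu);
                if (ret > 0)
                        break;  /* boosted somebody; back to the guest */
                if (ret == -ESRCH)
                        break;  /* we are alone on our rq; stop iterating */
                /* ret == 0: bad candidate, try the next vcpu */
        }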

I'll run the test with this patch.

However, Rik had a genuine concern about the cases where the runqueues
are not equally loaded: the lock holder might actually be on a
different runqueue, and not running.

Do you think that, instead of using rq->nr_running, we could get a
global sense of load using avenrun (something like
avenrun / num_online_cpus())?
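
Something like the below is what I have in mind (a very rough sketch;
the "less than one runnable task per online cpu" threshold is only a
guess, and whether avenrun moves fast enough for this path is exactly
the question):

        /*
         * Sketch: call the system "undercommitted" when the 1-minute
         * load average is below one task per online cpu. avenrun is
         * fixed-point, FIXED_1 == 1.0; get_avenrun() and FIXED_1 come
         * from linux/sched.h, num_online_cpus() from linux/cpumask.h.
         */
        static bool system_is_undercommitted(void)
        {
                unsigned long load[3];

                get_avenrun(load, 0, 0);        /* load[0]: 1-minute average */
                return load[0] < (unsigned long)num_online_cpus() * FIXED_1;
        }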


