Subject: Re: [RFC PATCH 3/3] kvm: use yield_to instead of sleep in kvm_vcpu_on_spin
On 12/05/2010 07:56 AM, Avi Kivity wrote:

>> +		if (vcpu == me)
>> +			continue;
>> +		if (vcpu->spinning)
>> +			continue;
>
> You may well want to wake up a spinner. Suppose
>
> A takes a lock
> B preempts A
> B grabs a ticket, starts spinning, yields to A
> A releases lock
> A grabs ticket, starts spinning
>
> at this point, we want A to yield to B, but it won't because of this check.

That's a good point. I guess we'll have to benchmark both with
and without the vcpu->spinning logic.
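To make that comparison concrete, here is a standalone sketch (simplified
stand-in types and a made-up skip_spinners knob, not the real kvm
structures) of the candidate scan with the spinning filter as a toggle,
so both variants can be run against Avi's scenario:

#include <stdbool.h>

struct vcpu_stub {
	bool spinning;	/* busy-waiting on a lock */
	bool has_task;	/* there is a task we could yield_to() */
	bool sleeping;	/* waitqueue active; no point boosting */
};

/* Return the first boost candidate after 'me', or -1 if none.
 * Sketch only; the field names here are invented for illustration. */
static int pick_boost_target(struct vcpu_stub *v, int nr, int me,
			     bool skip_spinners)
{
	int i;

	for (i = 0; i < nr; i++) {
		if (i == me)
			continue;
		/* This is the check under discussion: in Avi's scenario
		 * the spinner holds the ticket we need, so skipping it
		 * keeps everyone spinning. */
		if (skip_spinners && v[i].spinning)
			continue;
		if (!v[i].has_task || v[i].sleeping)
			continue;
		return i;
	}
	return -1;
}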

>> +		if (!task)
>> +			continue;
>> +		if (waitqueue_active(&vcpu->wq))
>> +			continue;
>> +		if (task->flags & PF_VCPU)
>> +			continue;
>> +		kvm->last_boosted_vcpu = i;
>> +		yield_to(task);
>> +		break;
>> +	}
>
> I think a random selection algorithm will be a better fit against
> special guest behaviour.

Possibly, though I suspect we'd have to hit very heavy overcommit ratios
with very large VMs before round robin stops working.
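If we do end up trying it, the change would be small. A rough sketch
(userspace C, invented names, rand() standing in for whichever PRNG the
kernel side would actually use) of starting the scan at a random vcpu
rather than at kvm->last_boosted_vcpu:

#include <stdlib.h>

/* Sketch only: pick a random starting index and walk all the vcpus
 * from there, wrapping around.  The candidate checks themselves would
 * stay exactly as in the patch; only the starting point changes, so a
 * guest cannot line its lock holders up against a predictable
 * round-robin order. */
static int scan_from_random_start(int nr_vcpus, int me)
{
	int start = rand() % nr_vcpus;
	int pass;

	for (pass = 0; pass < nr_vcpus; pass++) {
		int i = (start + pass) % nr_vcpus;

		if (i == me)
			continue;
		/* ... same spinning/waitqueue/PF_VCPU checks ... */
		return i;
	}
	return -1;
}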

>> -	/* Sleep for 100 us, and hope lock-holder got scheduled */
>> -	expires = ktime_add_ns(ktime_get(), 100000UL);
>> -	schedule_hrtimeout(&expires, HRTIMER_MODE_ABS);
>> +	if (first_round && last_boosted_vcpu == kvm->last_boosted_vcpu) {
>> +		/* We have not found anyone yet. */
>> +		first_round = 0;
>> +		goto again;
>
> Need to guarantee termination.

We do that by setting first_round to 0 :)

With this patch, we walk at most N+1 VCPUs in a VM with
N VCPUs.
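
To spell out the termination argument, here is the control flow in
isolation (simplified, with a made-up candidate_ok() standing in for
the checks in the patch): first_round is cleared before the goto, so
the retry fires at most once, and the two passes between them visit
each vcpu no more than once.

#include <stdbool.h>

struct kvm_stub {
	int nr_vcpus;
	int last_boosted_vcpu;
};

/* Hypothetical stand-in for the spinning/waitqueue/PF_VCPU checks. */
static bool candidate_ok(struct kvm_stub *kvm, int i)
{
	(void)kvm;
	(void)i;
	return false;
}

static int boost_one_vcpu(struct kvm_stub *kvm, int me)
{
	int last_boosted = kvm->last_boosted_vcpu;
	int first_round = 1;
	int i;

again:
	/* First pass covers last_boosted+1 .. N-1; the optional second
	 * pass covers 0 .. last_boosted, so no vcpu is visited twice. */
	for (i = first_round ? last_boosted + 1 : 0; i < kvm->nr_vcpus; i++) {
		if (!first_round && i > last_boosted)
			break;
		if (i == me || !candidate_ok(kvm, i))
			continue;
		kvm->last_boosted_vcpu = i;
		return i;	/* the real code does yield_to() here */
	}
	if (first_round && last_boosted == kvm->last_boosted_vcpu) {
		first_round = 0;	/* cleared before the goto, so we retry once */
		goto again;
	}
	return -1;
}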

--
All rights reversed

