From: Avi Kivity <avi@redhat.com>
Date: 2010-04-15
Subject: Re: VM performance issue in KVM guests.
On 04/15/2010 07:58 AM, Srivatsa Vaddagiri wrote:
> On Sun, Apr 11, 2010 at 11:40 PM, Avi Kivity <avi@redhat.com> wrote:
>
> The current handling of PLE is very suboptimal. With proper
> directed yield we should do much better there.
>
> Hi Avi,
> By directed yield, do you mean transferring the timeslice of
> one thread (which is contending for a lock) to another thread (which
> is holding the lock)?

It's a priority transfer (vruntime, in CFS terms); we don't know which
vcpu holds the lock, so we pick a co-vcpu at random.

> If, at that point in time, the lock-holder thread/VCPU is not
> actually running, i.e. it is at the back of the runqueue, would it
> help much? In that case, it will take time for the lock holder to run
> again, and the default timeslice it would have got then might already
> have been enough for it to release the lock.

The idea is to increase the chances of the target vcpu running, and to
decrease the chances of the spinner running (hopefully they change places).
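
To make that concrete, here is a minimal userspace sketch of the idea
(toy types and numbers only; directed_yield() below is illustrative,
not the actual KVM/CFS code):

#include <stdio.h>
#include <stdlib.h>

/* Toy CFS-like model: lower vruntime means "runs sooner". */
struct vcpu {
    int id;
    long long vruntime;    /* virtual runtime, in ns */
};

/*
 * Directed yield, sketched: the spinning vcpu donates scheduler
 * credit to a randomly chosen co-vcpu of the same guest.  We don't
 * know who holds the lock, so the target is picked at random; the
 * spinner becomes less attractive to the scheduler, the target more.
 */
static void directed_yield(struct vcpu *spinner, struct vcpu *covcpus,
                           int nr, long long credit)
{
    struct vcpu *target = &covcpus[rand() % nr];

    spinner->vruntime += credit;    /* deprioritize the spinner */
    target->vruntime -= credit;     /* boost the (hoped) lock holder */
}

int main(void)
{
    struct vcpu spinner = { .id = 0, .vruntime = 1000000 };
    struct vcpu others[] = {
        { .id = 1, .vruntime = 1100000 },
        { .id = 2, .vruntime = 1200000 },
    };

    srand(1);
    directed_yield(&spinner, others, 2, 500000);
    printf("spinner: %lld  vcpu1: %lld  vcpu2: %lld\n",
           spinner.vruntime, others[0].vruntime, others[1].vruntime);
    return 0;
}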

>
> I am also working on a prototype for some other technique here - to
> avoid preempting guest threads/VCPUs in the middle of their
> (spin-lock) critical section. This requires the guest to hint the
> host when they are in such a section. [1] has shown a 33%
> improvement in an apache benchmark based on this idea.
>

Certainly that has even greater potential for Linux guests. Note that
we spin on mutexes now, so we need to prevent preemption while the lock
owner is running.
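
A rough sketch of such a hinting scheme (all names below are
hypothetical; a real implementation would use a paravirt interface and
a page shared between guest and host): the guest publishes an "in
critical section" flag around the lock hold, and the host scheduler
consults it before preempting the vcpu.

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-vcpu state living in a guest/host shared page. */
struct pv_preempt_hint {
    atomic_int in_critical;    /* nonzero: please defer preemption */
};

static struct pv_preempt_hint hint;

/* Guest side: bracket the spinlock critical section with the hint. */
static void guest_lock(atomic_flag *lock)
{
    while (atomic_flag_test_and_set(lock))
        ;    /* spin; the hint is deliberately not set while spinning */
    atomic_store(&hint.in_critical, 1);
}

static void guest_unlock(atomic_flag *lock)
{
    atomic_store(&hint.in_critical, 0);
    atomic_flag_clear(lock);
}

/*
 * Host side: check the hint before preempting.  Note the small
 * windows just after acquire and just before release; a real
 * implementation would have to tolerate or close them.
 */
static bool host_may_preempt(void)
{
    return atomic_load(&hint.in_critical) == 0;
}

int main(void)
{
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    guest_lock(&lock);
    /* while the lock is held, host_may_preempt() returns false */
    (void)host_may_preempt();
    guest_unlock(&lock);
    return 0;
}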


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.