Date: Mon, 14 Sep 2015
From: Waiman Long
Subject: Re: [PATCH v6 6/6] locking/pvqspinlock: Queue node adaptive spinning
On 09/14/2015 10:10 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 02:37:38PM -0400, Waiman Long wrote:
>> In an overcommitted guest where some vCPUs have to be halted to make
>> forward progress in other areas, it is highly likely that a vCPU later
>> in the spinlock queue will be spinning while the ones earlier in the
>> queue have been halted. The spinning in the later vCPUs is then just a
>> waste of precious CPU cycles, because they are not going to get the
>> lock anytime soon: the earlier ones have to be woken up and take their
>> turn first.
>>
>> This patch implements an adaptive spinning mechanism where the vCPU
>> will call pv_wait() if the following conditions are true:
>>
>> 1) the vCPU has not been halted before;
>> 2) the previous vCPU is not running.
> Why 1? For the mutex adaptive stuff we only care about the lock holder
> running, right?

The wait-early-once logic was there because of the kick-ahead patch, as I
didn't want a recently kicked vCPU near the head of the queue to go back
to sleep too early. However, without kick-ahead, a woken-up vCPU should
now be at the queue head. Indeed, we can remove that check and simplify
the logic so that only condition 2 (the previous vCPU is not running)
matters.
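
For concreteness, the simplified queued-waiter logic could look roughly
like the sketch below. This is a minimal sketch, not the actual patch:
it assumes each queue node is a struct pv_node carrying a state field,
and it borrows illustrative names (vcpu_running/vcpu_halted,
SPIN_THRESHOLD, and a hypothetical PV_PREV_CHECK_MASK that throttles how
often the predecessor's state is re-read):

	static inline bool pv_wait_early(struct pv_node *prev, int loop)
	{
		/* Only sample the predecessor's state every few iterations. */
		if ((loop & PV_PREV_CHECK_MASK) != 0)
			return false;

		return READ_ONCE(prev->state) != vcpu_running;
	}

	static void pv_wait_node(struct mcs_spinlock *node,
				 struct mcs_spinlock *prev)
	{
		struct pv_node *pn = (struct pv_node *)node;
		struct pv_node *pp = (struct pv_node *)prev;
		int loop;

		for (;;) {
			for (loop = SPIN_THRESHOLD; loop; loop--) {
				if (READ_ONCE(node->locked))
					return;
				/*
				 * Stop spinning if the previous vCPU is not
				 * running -- the lock won't come to us soon.
				 */
				if (pv_wait_early(pp, loop))
					break;
				cpu_relax();
			}

			/*
			 * Halt until the predecessor kicks us; the state is
			 * set back to vcpu_running after pv_wait() returns.
			 */
			smp_store_mb(pn->state, vcpu_halted);
			if (!READ_ONCE(node->locked))
				pv_wait(&pn->state, vcpu_halted);
			WRITE_ONCE(pn->state, vcpu_running);
		}
	}

The mask-based throttling is there so the previous node's cacheline
isn't re-read on every single spin iteration.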

BTW, the queue-head vCPU at pv_wait_head_and_lock() doesn't wait early;
it will spin for the full threshold, as there is no way for it to figure
out whether the lock holder is running or not.
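
For contrast, the queue-head spin would look roughly like the following
(again only an illustrative sketch; trylock_clear_pending(), _Q_SLOW_VAL
and the struct __qspinlock byte-level overlay are assumed to be as in
the pvqspinlock code):

	struct __qspinlock *l = (void *)lock;	/* byte view of lock word */
	int loop;

	for (;;) {
		for (loop = SPIN_THRESHOLD; loop; loop--) {
			/* No wait-early check: lock holder's state is unknown. */
			if (trylock_clear_pending(lock))
				return;		/* got the lock */
			cpu_relax();
		}
		/* Flag _Q_SLOW_VAL so the unlocker will kick us, then halt. */
		pv_wait(&l->locked, _Q_SLOW_VAL);
	}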

Cheers,
Longman

