Subject: Re: [PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On 04/09/2015 10:13 AM, Peter Zijlstra wrote:
> On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
>> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
>>> On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
>>>> For a virtual guest with the qspinlock patch, a simple unfair byte lock
>>>> will be used if PV spinlock is not configured in or the hypervisor
>>>> isn't either KVM or Xen. The byte lock works fine with small guests
>>>> of just a few vCPUs. On a much larger guest, however, the byte lock
>>>> can have serious performance problems.
>>> Who cares?
>> There are some people out there running guests with dozens
>> of vCPUs. If the code exists to make those setups run better,
>> is there a good reason not to use it?
> Well, use paravirt; !paravirt stuff sucks performance-wise anyhow.
>
> The question really is: is the added complexity worth the maintenance
> burden. And I'm just not convinced !paravirt virt is a performance
> critical target.

I am just saying that the unfair qspinlock performs better than the
simple byte lock. However, my current priority is to get the native and
PV qspinlock upstream. The unfair qspinlock can certainly wait.
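
(For context, the "simple byte lock" discussed here is essentially an
unfair test-and-set spinlock. The following is only a minimal
illustrative sketch, not the kernel's actual implementation: every
waiter spins on the same byte, so on a large guest many vCPUs hammer
one cache line and whichever vCPU's atomic exchange lands first wins,
which is why fairness and throughput degrade as the guest grows.)

/* Minimal sketch of an unfair test-and-set ("byte") lock.
 * Illustrative only -- not the kernel's implementation. */
#include <stdatomic.h>

struct byte_lock {
	atomic_uchar locked;	/* 0 = free, 1 = held */
};

static inline void byte_lock_acquire(struct byte_lock *l)
{
	/* Unfair: any spinning vCPU may grab the lock, regardless of
	 * how long the others have been waiting. */
	while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire)) {
		/* Spin read-only until the lock looks free, then retry. */
		while (atomic_load_explicit(&l->locked, memory_order_relaxed))
			;
	}
}

static inline void byte_lock_release(struct byte_lock *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}

A queue-based lock instead hands the lock to waiters in arrival order,
with each waiter spinning on its own cache line, which is what the
unfair qspinlock variant aims to approximate while still allowing lock
stealing in a guest.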

Cheers,
Longman

