Subject: Re: [PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
> > On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
> >> For a virtual guest with the qspinlock patch, a simple unfair byte lock
> >> will be used if PV spinlock is not configured in or the hypervisor is
> >> neither KVM nor Xen. The byte lock works fine with small guests of
> >> just a few vCPUs. On a much larger guest, however, the byte lock can
> >> have serious performance problems.
> >
> > Who cares?
>
> There are some people out there running guests with dozens
> of vCPUs. If the code exists to make those setups run better,
> is there a good reason not to use it?

Well, use paravirt; !paravirt stuff sucks performance-wise anyhow.

The question really is: is the added complexity worth the maintenance
burden? And I'm just not convinced !paravirt virt is a
performance-critical target.
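
For reference, the unfair byte lock being discussed is essentially a
test-and-set lock. A minimal sketch (illustrative only, not the actual
kernel code; GCC builtins stand in for the kernel's xchg primitives):

/*
 * All waiters spin on the same byte; there is no queue, so the
 * hardware picks the next owner more or less at random.  With dozens
 * of vCPUs that means heavy cacheline bouncing and possible
 * starvation, which is the performance problem described above.
 */
struct byte_lock {
	unsigned char locked;		/* 0 = free, 1 = held */
};

static inline void byte_lock(struct byte_lock *lock)
{
	/* Atomically swap in 1; loop until the old value was 0 (free). */
	while (__sync_lock_test_and_set(&lock->locked, 1))
		while (lock->locked)
			;		/* read-only spin to cut bus traffic */
}

static inline void byte_unlock(struct byte_lock *lock)
{
	/* Store 0 with release semantics. */
	__sync_lock_release(&lock->locked);
}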

> Having said that, only KVM and Xen seem to support very
> large guests, and PV spinlock is available there.
>
> I believe both VMware and Hyper-V have a 32-vCPU limit, anyway.

Don't we have Hyper-V paravirt drivers? They could add support for
paravirt spinlocks too.
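
(For reference, the x86 pv spinlock hooks at the time look roughly like
the sketch below. It is simplified from memory and the field types are
illustrative; see arch/x86/include/asm/paravirt_types.h for the real
definitions. A Hyper-V backend would supply its own implementations of
the two callbacks.)

struct arch_spinlock;

/*
 * Simplified sketch of the pv spinlock hook pair: a waiter that has
 * spun too long halts its vCPU via lock_spinning(), and the unlocker
 * wakes it via unlock_kick().  KVM and Xen already provide these.
 */
struct pv_lock_ops_sketch {
	void (*lock_spinning)(struct arch_spinlock *lock, unsigned int ticket);
	void (*unlock_kick)(struct arch_spinlock *lock, unsigned int ticket);
};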



