 
Date: Fri, 4 Sep 2015
From: Peter Zijlstra <peterz@infradead.org>
Subject: Re: [4.2, Regression] Queued spinlocks cause major XFS performance regression
On Fri, Sep 04, 2015 at 08:21:28AM -0700, Linus Torvalds wrote:
> On Fri, Sep 4, 2015 at 8:14 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > The reason we chose to revert to a test-and-set is because regular fair
> > locks, like the ticket and the queue thing, have horrible behaviour
> > under vcpu preemption.
>
> Right. However, with our old ticket locks, that's what we got when you
> didn't ask for paravirt support. No?

Indeed.

> And even ignoring the "implementation was crap" issue, some people may
> well want their kernels to be "bare hardware" kernels even under a
> hypervisor. It may be a slim hypervisor that gives you all the cpus,
> or it may just be a system that is just sufficiently overprovisioned,
> so you don't get vcpu preemption in practice.

Fair enough; I had not considered the slim hypervisor case.

Should I place the virt_spin_lock() thing under CONFIG_PARAVIRT (maybe
even CONFIG_PARAVIRT_SPINLOCKS), such that only paravirt-enabled
kernels, when run on a hypervisor that does not support paravirt
patching (HyperV, VMware, etc.), revert to the test-and-set?
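
Something like the below, perhaps (sketch only; the exact Kconfig
symbol and the shape of the fallback loop are illustrative, not
settled):

#ifdef CONFIG_PARAVIRT
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;	/* bare metal; use the fair queued lock */

	/*
	 * No paravirt patching available (HyperV, VMware, etc.); fall
	 * back to a simple test-and-set lock, because fair locks have
	 * horrible behaviour under vcpu preemption.
	 *
	 * Spin on a plain read first so we do not pound the cacheline
	 * with a constant stream of cmpxchg's.
	 */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}
#endif /* CONFIG_PARAVIRT */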

> But it would be interesting to hear if just fixing the busy-looping to
> not pound the lock with a constant stream of cmpxchg's is already
> sufficient to fix the big picture problem.

Dave replaced the cpu_relax() with a __delay(1), to match what
spinlock-debug does, and that fixed things for him.
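
IOW, assuming the 4.2-era fallback loop, his tweak amounts to
something like this (sketch, not his actual diff):

	/* old: back-to-back cmpxchg, pounding the lock cacheline */
	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
		__delay(1);	/* was cpu_relax() */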

Of course, it would be good if he could try the proposed patch too.

