Subject: Re: [4.2, Regression] Queued spinlocks cause major XFS performance regression
On Fri, Sep 04, 2015 at 08:58:38AM -0700, Linus Torvalds wrote:
> On Fri, Sep 4, 2015 at 8:30 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> >> And even ignoring the "implementation was crap" issue, some people may
> >> well want their kernels to be "bare hardware" kernels even under a
> >> hypervisor. It may be a slim hypervisor that gives you all the cpus,
> >> or it may just be a system that is just sufficiently overprovisioned,
> >> so you don't get vcpu preemption in practice.
> >
> > Fair enough; I had not considered the slim hypervisor case.
> >
> > Should I place the virt_spin_lock() thing under CONFIG_PARAVIRT (maybe
> > even _SPINLOCKS) such that only paravirt-enabled kernels, when run on a
> > hypervisor that does not support paravirt patching (HyperV, VMware,
> > etc.), revert to the test-and-set?
>
> My gut feel would be to try to match our old paravirt setup, which
> similarly replaced the ticket locks with the test-and-set lock, and
> try to match the situation where that happened?

I'm not sure there was a test-and-set option in 4.1.

Either the hypervisor layer implemented paravirt spinlocks (Xen, KVM)
and you selected CONFIG_PARAVIRT_SPINLOCKS, which had a fairly large
negative impact on native code, or you got our native ticket locking.

So if you want, I can simply remove the whole test-and-set thing, but I'd
rather fix it and put it under one of the PARAVIRT options.
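
Roughly, one way the fixed fallback could look (a sketch only; the exact
gating and the hypervisor check are illustrative, though static_cpu_has(),
atomic_cmpxchg(), cpu_relax() and _Q_LOCKED_VAL are all existing kernel
helpers/symbols):

#ifdef CONFIG_PARAVIRT
#define virt_spin_lock virt_spin_lock
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	/* Only engage the fallback when actually running on a hypervisor. */
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/*
	 * On hypervisors without PARAVIRT_SPINLOCKS support, fall back
	 * to a simple test-and-set lock; a fair queued lock has horrible
	 * lock-holder preemption issues when vcpus get scheduled out.
	 *
	 * Spin on a plain read and only attempt the atomic cmpxchg once
	 * the lock looks free, so waiters share the cacheline instead of
	 * hammering it in exclusive mode.
	 */
	do {
		while (atomic_read(&lock->val))
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}
#endif /* CONFIG_PARAVIRT */

The read-then-cmpxchg loop is the substance of the fix: the fallback as
it stands spins on the cmpxchg itself, which has every waiter pounding
the lock cacheline in exclusive mode.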


