Subject: Re: [RFC][PATCH 0/7] locking: qspinlock

* Peter Zijlstra <peterz@infradead.org> wrote:

> On Tue, Mar 11, 2014 at 11:45:03AM +0100, Ingo Molnar wrote:
> >
> > * Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > Hi Waiman,
> > >
> > > I promised you this series a number of days ago; sorry for the delay,
> > > I've been somewhat unwell :/
> > >
> > > That said, these few patches start with a (hopefully) simple and
> > > correct form of the queue spinlock, and then gradually build upon
> > > it, explaining each optimization as we go.
> > >
> > > Having these optimizations as separate patches helps twofold:
> > > firstly it makes one aware of which exact optimizations were done,
> > > and secondly it allows one to prove or disprove any one step,
> > > seeing how they should be mostly identity transforms.
> > >
> > > The resulting code is close to what you posted, I think; however,
> > > it has one atomic op less in the pending wait-acquire case for
> > > NR_CPUS != huge. It also doesn't do lock stealing; it's still
> > > perfectly fair afaict.
> > >
> > > Have I missed any tricks from your code?
> >
> > Waiman, you indicated in the other thread that these look good to
> > you, right? If so then I can queue them up so that they form a
> > base for further work.
>
> Ah, no that was on the qrwlock; I think we managed to cross wires
> somewhere.

Oops, too many q-locks ;-)

> I've got this entire pile waiting for something:
>
> lkml.kernel.org/r/20140210195820.834693028@infradead.org
>
> That's 5 mutex patches and the 2 qrwlock patches. Not sure what to
> do with them. To merge or not, that is the question.

Can merge them into tip:core/locking if there are no objections.

Thanks,

Ingo
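
For readers following the series: below is a minimal sketch of the
MCS-style queue spinlock shape the patches start from. It is
illustrative only; the names (mcs_lock, mcs_node, mcs_spin_lock) are
hypothetical and C11 atomics stand in for the kernel's primitives,
while the real qspinlock packs the queue tail into a single 32-bit
lock word rather than using a raw pointer.

#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical types; an mcs_lock starts out with tail == NULL. */
struct mcs_node {
	struct mcs_node *_Atomic next;
	atomic_int locked;
};

struct mcs_lock {
	struct mcs_node *_Atomic tail;
};

void mcs_spin_lock(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, 0, memory_order_relaxed);

	/* One atomic xchg appends us to the queue tail. */
	prev = atomic_exchange_explicit(&lock->tail, node,
					memory_order_acq_rel);
	if (prev) {
		/* Queue was non-empty: link in behind our predecessor
		 * and spin on our own cacheline until it hands over. */
		atomic_store_explicit(&prev->next, node,
				      memory_order_release);
		while (!atomic_load_explicit(&node->locked,
					     memory_order_acquire))
			;
	}
	/* prev == NULL: the lock was free and is now ours. */
}

void mcs_spin_unlock(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next =
		atomic_load_explicit(&node->next, memory_order_acquire);

	if (!next) {
		/* No successor visible: try to mark the queue empty. */
		struct mcs_node *me = node;

		if (atomic_compare_exchange_strong_explicit(
			    &lock->tail, &me, NULL,
			    memory_order_release, memory_order_relaxed))
			return;

		/* A new waiter xchg'd itself onto the tail but has not
		 * linked in yet; wait for the next pointer to appear. */
		while (!(next = atomic_load_explicit(&node->next,
						     memory_order_acquire)))
			;
	}
	/* Hand the lock directly to the next waiter in line. */
	atomic_store_explicit(&next->locked, 1, memory_order_release);
}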
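
And a hedged sketch of the pending-bit idea behind the "one atomic op
less in the pending wait-acquire case" remark above: a single
contender claims a pending bit with one atomic op and spins on the
lock word itself, so the MCS queue is only touched once there are two
or more waiters. The bit layout and helper name below are
illustrative, not the kernel's.

#include <stdatomic.h>

#define _Q_LOCKED_VAL	0x01u	/* illustrative bit layout */
#define _Q_PENDING_VAL	0x02u

/* Returns 1 if the lock was taken on the uncontended or pending path,
 * 0 if the caller must fall back to the MCS-style queue above. */
static int qspin_pending_fastpath(atomic_uint *lock)
{
	unsigned int old = 0;

	/* Uncontended: 0 -> locked with a single cmpxchg. */
	if (atomic_compare_exchange_strong_explicit(lock, &old,
						    _Q_LOCKED_VAL,
						    memory_order_acquire,
						    memory_order_relaxed))
		return 1;

	/* Anything beyond the locked bit means a pending or queued
	 * waiter already exists; join the queue instead. */
	if (old & ~_Q_LOCKED_VAL)
		return 0;

	/* One atomic op claims the pending bit. */
	old = atomic_fetch_or_explicit(lock, _Q_PENDING_VAL,
				       memory_order_acquire);
	if (old & _Q_PENDING_VAL)
		return 0;	/* lost the race for the pending bit */

	/* Wait for the current owner to drop the locked bit. */
	while (atomic_load_explicit(lock, memory_order_acquire)
	       & _Q_LOCKED_VAL)
		;

	/* We own the pending bit and the lock is free; with no tail
	 * bits in this toy word the pending -> locked hand-over is a
	 * plain store, i.e. the atomic op saved on this path. */
	atomic_store_explicit(lock, _Q_LOCKED_VAL, memory_order_release);
	return 1;
}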

