Subject: Re: [PATCH -tip/master 3/7] locking/mcs: Remove obsolete comment
On Mon, 2014-07-28 at 09:49 -0700, Jason Low wrote:
> On Sun, 2014-07-27 at 22:18 -0700, Davidlohr Bueso wrote:
> > ... as we clearly inline mcs_spin_lock() now.
> >
> > Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
> > ---
> > kernel/locking/mcs_spinlock.h | 3 ---
> > 1 file changed, 3 deletions(-)
> >
> > diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
> > index 23e89c5..4d60986 100644
> > --- a/kernel/locking/mcs_spinlock.h
> > +++ b/kernel/locking/mcs_spinlock.h
> > @@ -56,9 +56,6 @@ do { \
> > * If the lock has already been acquired, then this will proceed to spin
> > * on this node->locked until the previous lock holder sets the node->locked
> > * in mcs_spin_unlock().
> > - *
> > - * We don't inline mcs_spin_lock() so that perf can correctly account for the
> > - * time spent in this lock function.
> > */
> > static inline
> > void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>
> Likewise, I'm wondering if we should make this function noinline so that
> "perf can correctly account for the time spent in this lock function".

Well, it's not hard to see where the contention is when working on
locking issues with perf. With mutexes there are only two sources:
either the task is just spinning trying to get the lock, or it's gone
to the slowpath, where you can see a lot of contention on the
wait_lock.

So unless I'm missing something, I don't think we'd need to make this
noinline again -- although I forget why it was changed in the first
place.
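(For reference, what Jason is suggesting would amount to dropping the
inline hint so the function keeps its own symbol in profiles. A minimal
sketch, assuming the kernel's noinline annotation, which expands to
__attribute__((noinline)); the lock body itself would be unchanged:

static noinline
void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
{
	/*
	 * With a dedicated symbol, perf attributes samples spent
	 * spinning here to mcs_spin_lock itself rather than to
	 * whichever caller the compiler inlined it into.
	 */

	/* ... lock body unchanged ... */
}

A perf record -g / perf report run would then show the spinning time
under mcs_spin_lock directly.)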


