Subject: Re: [PATCH][RFC] Adding information of counts processes acquired how many spinlocks to schedstat

* Frederic Weisbecker <fweisbec@gmail.com> wrote:

> On Fri, Jul 10, 2009 at 03:43:07PM +0200, Ingo Molnar wrote:
> >
> > * Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> >
> > > On Fri, 2009-07-10 at 21:45 +0900, mitake@dcl.info.waseda.ac.jp wrote:
> > > > From: Andi Kleen <andi@firstfloor.org>
> > > > Subject: Re: [PATCH][RFC] Adding information of counts processes acquired how many spinlocks to schedstat
> > > > Date: Mon, 6 Jul 2009 13:54:51 +0200
> > > >
> > > > Thank you for your replies, Peter and Andi.
> > > >
> > > > > > Maybe re-use the LOCK_CONTENDED macros for this, but I'm not sure we
> > > > > > want to go there and put code like this on the lock hot-paths for !debug
> > > > > > kernels.
> > > > >
> > > > > My concern was similar.
> > > > >
> > > > > I suspect it would, in theory, be OK for the slow spinning path, but
> > > > > I am somewhat concerned about the additional cache miss for checking
> > > > > the global flag even in this case. This could hurt when the kernel is
> > > > > running fully cache hot, in that the cache miss might be far more
> > > > > expensive than a short spin.
> > > >
> > > > Yes, there will be overhead; that much is certain.
> > > > But there is a radical way to sidestep it: add a Kconfig option for
> > > > measuring spinlocks and an #ifdef to spinlock.c, so that people who
> > > > want to avoid this overhead can disable spinlock measurement
> > > > completely.
> > > >
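
A minimal sketch of that #ifdef idea, purely for illustration: the
CONFIG_SPINLOCK_COUNT option, the account_spin_lock() helper and the
per-task counter field are hypothetical names, not what the posted patch
uses; the rest of _spin_lock() roughly follows the current
kernel/spinlock.c:

/* kernel/spinlock.c, guarded by a hypothetical CONFIG_SPINLOCK_COUNT */
#ifdef CONFIG_SPINLOCK_COUNT
static inline void account_spin_lock(void)
{
	current->nr_spinlocks_acquired++;	/* hypothetical schedstat counter */
}
#else
static inline void account_spin_lock(void)
{
}
#endif

void __lockfunc _spin_lock(spinlock_t *lock)
{
	account_spin_lock();
	preempt_disable();
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
	LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
}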
> > > > And there is another way to avoid the measurement overhead:
> > > > make _spin_lock a function-pointer variable. When you don't
> > > > want to measure spinlocks, assign _spin_lock_raw(), which is
> > > > equal to the current _spin_lock(); when you want to measure
> > > > spinlocks, assign _spin_lock_perf(), which locks and measures.
> > > > This would banish the cache-miss problem you mentioned. I think
> > > > it may also be useful for avoiding the recursion problem.
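
Roughly what that indirection could look like; all of the names below
(do_spin_lock, _spin_lock_raw, _spin_lock_perf, spin_lock_measure_enable
and the per-task counter) are hypothetical:

/* Hypothetical function-pointer variant of _spin_lock(). */
static void _spin_lock_raw(spinlock_t *lock)
{
	/* the current, unmeasured path */
	preempt_disable();
	_raw_spin_lock(lock);
}

static void _spin_lock_perf(spinlock_t *lock)
{
	current->nr_spinlocks_acquired++;	/* hypothetical counter */
	preempt_disable();
	_raw_spin_lock(lock);
}

/* default: no measurement, the hot path only pays an indirect call */
static void (*do_spin_lock)(spinlock_t *lock) = _spin_lock_raw;

void __lockfunc _spin_lock(spinlock_t *lock)
{
	do_spin_lock(lock);
}

/* flipped from a sysctl/debugfs knob when measurement is wanted */
void spin_lock_measure_enable(int on)
{
	do_spin_lock = on ? _spin_lock_perf : _spin_lock_raw;
}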
> > >
> > > We already have that: it's called CONFIG_LOCKDEP &&
> > > CONFIG_EVENT_TRACING && CONFIG_EVENT_PROFILE. With those enabled
> > > you get tracepoints on every lock acquire and lock release, and
> > > perf can already use those as event sources.
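
For example, a probe on the existing lock_acquire tracepoint could count
acquisitions per task without touching the spinlock fast path at all. The
sketch below is only illustrative: it assumes the 2.6.31-era
register_trace_lock_acquire() interface and the lock_acquire event
prototype as I remember them (both have changed across versions), and the
per-task counter field is hypothetical:

/* Hypothetical module hooking the existing lock_acquire trace event. */
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/tracepoint.h>
#include <trace/events/lock.h>	/* trace/events/lockdep.h on older trees */

static void probe_lock_acquire(struct lockdep_map *lock, unsigned int subclass,
			       int trylock, int read, int check,
			       struct lockdep_map *next_lock, unsigned long ip)
{
	current->nr_locks_acquired++;	/* hypothetical per-task counter */
}

static int __init lock_count_init(void)
{
	return register_trace_lock_acquire(probe_lock_acquire);
}

static void __exit lock_count_exit(void)
{
	unregister_trace_lock_acquire(probe_lock_acquire);
	tracepoint_synchronize_unregister();
}

module_init(lock_count_init);
module_exit(lock_count_exit);
MODULE_LICENSE("GPL");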
> >
> > Yes, that could be reused for this facility too.
> >
> > Ingo
>
>
> I wonder if the lock_*() events should become independent from
> lockdep so that we don't need to always enable lockdep to get the
> lock events at the same time.
>
> It could be a separate option.

They already should be to a large degree if CONFIG_LOCK_STAT is
enabled but CONFIG_PROVE_LOCKING is off. In theory :-)

Ingo

