Subject: Re: [PATCH 10/11] tracing/perf: Fix lock events recursions in the fast path
Date: 2010-02-05 10:53
On Fri, 2010-02-05 at 10:45 +0100, Peter Zijlstra wrote:
> On Fri, 2010-02-05 at 10:38 +0800, Lai Jiangshan wrote:
> > Paul E. McKenney wrote:
> > > On Wed, Feb 03, 2010 at 10:14:34AM +0100, Frederic Weisbecker wrote:
> > >> There are RCU locked read-side areas in the path where we submit
> > >> a trace event, and these rcu_read_(un)lock() calls trigger lock
> > >> events, which create recursive events.
> > >>
> > >> One pair in do_perf_sw_event:
> > >>
> > >> __lock_acquire
> > >> |
> > >> |--96.11%-- lock_acquire
> > >> | |
> > >> | |--27.21%-- do_perf_sw_event
> > >> | | perf_tp_event
> > >> | | |
> > >> | | |--49.62%-- ftrace_profile_lock_release
> > >> | | | lock_release
> > >> | | | |
> > >> | | | |--33.85%-- _raw_spin_unlock
> > >>
> > >> Another pair in perf_output_begin/end:
> > >>
> > >> __lock_acquire
> > >> |--23.40%-- perf_output_begin
> > >> | | __perf_event_overflow
> > >> | | perf_swevent_overflow
> > >> | | perf_swevent_add
> > >> | | perf_swevent_ctx_event
> > >> | | do_perf_sw_event
> > >> | | perf_tp_event
> > >> | | |
> > >> | | |--55.37%-- ftrace_profile_lock_acquire
> > >> | | | lock_acquire
> > >> | | | |
> > >> | | | |--37.31%-- _raw_spin_lock
> > >>
> > >> The problem is not so much the trace recursion itself, as we
> > >> already have recursion protection (though it's always wasteful to
> > >> recurse). But the trace events are outside the lockdep recursion
> > >> protection, so each lockdep event triggers a lock trace, which in
> > >> turn triggers two more lockdep events. The recursive lock trace
> > >> event won't be taken because of the trace recursion, so the
> > >> recursion stops there, but lockdep will still analyse these new
> > >> events:
> > >>
> > >> To sum up, for each lockdep event we have:
> > >>
> > >> lock_*()
> > >> |
> > >> trace lock_acquire
> > >> |
> > >> ----- rcu_read_lock()
> > >> | |
> > >> | lock_acquire()
> > >> | |
> > >> | trace_lock_acquire() (stopped)
> > >> | |
> > >> | lockdep analyze
> > >> |
> > >> ----- rcu_read_unlock()
> > >> |
> > >> lock_release
> > >> |
> > >> trace_lock_release() (stopped)
> > >> |
> > >> lockdep analyze
> > >>
> > >> And the above repeats twice, as we have two RCU read-side
> > >> sections when we submit an event.
> > >>
> > >> This is fixed in this patch by using the non-lockdep versions of
> > >> rcu_read_(un)lock().
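
For illustration, the shape of that change in the two read-side sections
named above is roughly the following; this is a sketch of the idea, not
the literal diff:

	-	rcu_read_lock();	/* goes through lock_acquire(): lockdep + lock trace */
	+	__rcu_read_lock();	/* raw RCU entry point: no lockdep, no tracepoint */
		...
	-	rcu_read_unlock();	/* likewise goes through lock_release() */
	+	__rcu_read_unlock();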
> > >
> > > Hmmm... Perhaps I should rename __rcu_read_lock() to something more
> > > meaningful if it is to be used outside of the RCU files. In the
> > > meantime:
> > >
> > > Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > >
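
As a purely hypothetical illustration of such a rename (the name below is
invented here, not something proposed in the thread), a self-describing
wrapper could be as thin as:

	/* Hypothetical name for the raw, annotation-free RCU entry point. */
	static inline void rcu_read_lock_notrace(void)
	{
		__rcu_read_lock();	/* no lockdep annotation, no trace event */
	}

	static inline void rcu_read_unlock_notrace(void)
	{
		__rcu_read_unlock();
	}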
> >
> > Perhaps we can use the existing rcu_read_lock_sched_notrace().
> >
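
For reference, rcu_read_lock_sched_notrace() and its unlock counterpart
already exist in include/linux/rcupdate.h; sketched from memory (details
vary by kernel version), they reduce to the notrace preempt primitives,
so they trigger neither lockdep nor the function tracer:

	static inline notrace void rcu_read_lock_sched_notrace(void)
	{
		preempt_disable_notrace();	/* no tracing, no lockdep */
		__acquire(RCU_SCHED);		/* sparse annotation only */
	}

	static inline notrace void rcu_read_unlock_sched_notrace(void)
	{
		__release(RCU_SCHED);
		preempt_enable_notrace();
	}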
> > Not related to this patchset, but on RCU & lockdep:
> >
> > We need to remove lockdep from rcu_read_lock_*().
>
> I'm not at all convinced we need to do any such thing; remember, it's
> debugging stuff. Performance, while nice, doesn't really count.

That said, I'm not at all happy about removing lockdep annotations to
make the tracer faster; that's really counterproductive.




