Date:    2009-05-29
From:    Ingo Molnar
Subject: Re: [PATCH RFC] perf_counter: Don't swap contexts containing locked mutex

* Peter Zijlstra <peterz@infradead.org> wrote:

> On Fri, 2009-05-29 at 10:13 +0200, Peter Zijlstra wrote:
> > On Fri, 2009-05-29 at 10:06 +0200, Peter Zijlstra wrote:
> >
> > > static struct perf_counter_context *
> > > pin_ctx(struct perf_counter *counter, u64 *old_gen)
> > > {
> > > 	struct perf_counter_context *ctx;
> > > 	unsigned long flags;
> > >
> > > 	rcu_read_lock();
> > > retry:
> > > 	ctx = rcu_dereference(counter->ctx);
> > > 	spin_lock_irqsave(&ctx->lock, flags);
> > > 	/* counter->ctx may have changed before we got the lock; retry */
> > > 	if (ctx != rcu_dereference(counter->ctx)) {
> > > 		spin_unlock_irqrestore(&ctx->lock, flags);
> > > 		goto retry;
> > > 	}
> > >
> > > 	/* make the context look too old to be cloned while it is pinned */
> > > 	*old_gen = ctx->generation;
> > > 	ctx->generation = ~0ULL;
> > > 	spin_unlock_irqrestore(&ctx->lock, flags);
> > > 	rcu_read_unlock();
> > >
> > > 	return ctx;
> > > }
> > >
> > > static void unpin_ctx(struct perf_counter_context *ctx, u64 old_gen)
> > > {
> > > 	unsigned long flags;
> > >
> > > 	spin_lock_irqsave(&ctx->lock, flags);
> > > 	/* restore the saved generation so the context can be cloned again */
> > > 	ctx->generation = old_gen;
> > > 	spin_unlock_irqrestore(&ctx->lock, flags);
> > > }
> >
> > OK, I think I got this wrong: counter->ctx isn't the problem,
> > task->perf_counter_ctx is.
> >
> > Still would be nice to write it in the above form. I'll go over the code
> > again to see who else might want it.
>
> OK, I went over the code, and your patch does indeed cover the few
> spots we need. It was just my brain going haywire and auditing the
> wrong pattern.
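
For reference, the same generation-pinning idea applied to the per-task
context would look roughly like the sketch below. This is only an
untested illustration, not the patch under discussion; it assumes the
per-task context is reached through an RCU-protected
task->perf_counter_ctxp pointer, so the field name may differ from the
shorthand used above.

static struct perf_counter_context *
pin_task_ctx(struct task_struct *task, u64 *old_gen)
{
	struct perf_counter_context *ctx;
	unsigned long flags;

	rcu_read_lock();
retry:
	ctx = rcu_dereference(task->perf_counter_ctxp);
	if (ctx) {
		spin_lock_irqsave(&ctx->lock, flags);
		/* the task may have switched to another context; start over */
		if (ctx != rcu_dereference(task->perf_counter_ctxp)) {
			spin_unlock_irqrestore(&ctx->lock, flags);
			goto retry;
		}
		/* make the context look unclonable while it is pinned */
		*old_gen = ctx->generation;
		ctx->generation = ~0ULL;
		spin_unlock_irqrestore(&ctx->lock, flags);
	}
	rcu_read_unlock();

	return ctx;
}

Unpinning would restore the saved generation under ctx->lock, exactly as
unpin_ctx() does above.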

ok - i'll try this with my 'perf stat make -j' workload that quickly
locks up on a Nehalem. (bug introduced by the context switch
optimizations)

Ingo

