Subject: RE: [PATCH] perf_counter: Fix a race on perf_counter_ctx
On Tue, 2009-08-18 at 14:20 +0100, Metzger, Markus T wrote:
> >-----Original Message-----
> >From: Peter Zijlstra [mailto:peterz@infradead.org]
> >Sent: Tuesday, August 18, 2009 3:00 PM
> >To: Metzger, Markus T
> >Cc: Ingo Molnar; tglx@linutronix.de; hpa@zytor.com; markus.t.metzger@gmail.com; linux-
> >kernel@vger.kernel.org; Paul Mackerras
> >Subject: RE: [PATCH] perf_counter: Fix a race on perf_counter_ctx
> >
> >On Tue, 2009-08-18 at 14:59 +0200, Peter Zijlstra wrote:
> >> On Tue, 2009-08-18 at 13:49 +0100, Metzger, Markus T wrote:
> >> > Hi Ingo, Peter,
> >> >
> >> > Did you say that branch tracing is working for you?
> >> >
> >> > On my system, the kernel hangs.
> >> >
> >> > Could it be that it simply takes too long to copy the trace? When I set the number
> >> > of samples to 10, everything seems to work OK. When I increase that number to 1000,
> >> > the kernel is getting very slow and eventually hangs.
> >> >
> >> > I get a message "hrtimer: interrupt too slow", and I get a soft lockup bug. The rest
> >> > of the message log seems pretty garbled.
> >
> >How many NMI/s is this generating anyway?
>
> One every 800 or so branches in the current configuration - which results in 800 plus
> a few perf_counter_output() calls per interrupt.

Right, that's terribly expensive. It might be worth it to specialize
that.
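For reference, with the generic path each of those ~800 records ends up in a
full perf_counter_output() call, each with its own perf_output_begin() /
perf_output_end(). Per interrupt that amounts to roughly the following; the
drain wrapper and the bts_record layout below are only illustrative, not the
actual ds.c code:

static void bts_drain_generic(struct perf_counter *counter,
			      struct bts_record *records, int nr_entries)
{
	struct pt_regs regs;
	int i;

	memset(&regs, 0, sizeof(regs));

	for (i = 0; i < nr_entries; i++) {
		struct perf_sample_data data;

		regs.ip     = records[i].from;	/* illustrative record layout */
		data.regs   = &regs;
		data.addr   = records[i].to;
		data.period = 1;

		/*
		 * One full output call per branch record: each call
		 * re-walks attr.sample_type and does its own
		 * perf_output_begin()/perf_output_end().
		 */
		perf_counter_output(counter, 1, &data);
	}
}

Specializing it would amortize the begin/end and the sample_type parsing to
once per interrupt: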

int perf_bts_entry_size(struct perf_counter *counter)
{
	u64 sample_type = counter->attr.sample_type;
	int size = sizeof(struct perf_event_header);

	if (sample_type & PERF_SAMPLE_IP)
		size += sizeof(u64);

	...

	/*
	 * maybe disallow PERF_SAMPLE_CALLCHAIN/RAW and grouping
	 * on BTS counters
	 */

	return size;
}

void perf_bts_output(struct perf_counter *counter, ...)
{
	int size = perf_bts_entry_size(counter);
	struct perf_output_handle handle;
	u64 entry[size / sizeof(u64)];
	int ret, i;

	/* build one template entry; the header is shared by all entries */
	*(struct perf_event_header *)entry = (struct perf_event_header){
		.type = PERF_EVENT_SAMPLE,
		.misc = 0,
		.size = size,
	};

	... /* set all entry things */

	ret = perf_output_begin(&handle, counter,
				size * nr_entries, 1, 1);
	if (ret)
		return;

	for (i = 0; i < nr_entries; i++) {
		/* over-write the two that differ per entry */
		entry[ip_entry] = bts_data[i].ip;
		entry[add_entry] = bts_data[i].the_other_one;

		perf_output_copy(&handle, entry, size);
	}

	perf_output_end(&handle);
}

or something like that.. would that work?
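The call site on the drain side would then shrink to one call per interrupt.
Purely as illustration (the hook name and the extra perf_bts_output()
parameters are assumed here, matching the sketch above):

static void bts_drain_specialized(struct perf_counter *counter,
				  struct bts_record *bts_data, int nr_entries)
{
	/*
	 * One perf_output_begin()/perf_output_end() pair covers the
	 * whole buffer; per record it is two stores plus one
	 * perf_output_copy() of the pre-built template entry.
	 */
	perf_bts_output(counter, bts_data, nr_entries);
}

At ~800 records per NMI that turns ~800 begin/end round trips into a single one.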

