Date: 2010-09-17
From: Robert Richter
Subject: Re: [PATCH] perf, x86: catch spurious interrupts after disabling counters
On 16.09.10 13:34:40, Peter Zijlstra wrote:
> On Wed, 2010-09-15 at 18:20 +0200, Robert Richter wrote:
> > Some cpus still deliver spurious interrupts after disabling a counter.
> > This caused 'undelivered NMI' messages. This patch fixes this.
> >
> I tried the below and that also seems to work.. So yeah, looks like
> we're getting late NMIs.

I would prefer the fix I sent. This patch does an rdmsrl() on every
inactive counter for each NMI. It also changes the counter value of
all inactive counters, so restarting a counter by only setting the
enable bit may start with an unexpected counter value (I didn't check
the current implementation to see whether this could be a problem).

It is also not possible to detect in hardware which counter fired the
interrupt. We cannot assume a counter overflowed just by reading the
upper bit of the counter value; we must track this in software.
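
To illustrate the idea, here is a minimal sketch of such software
tracking, written as a stand-alone user-space model with a
hypothetical 'running' bitmask. It is not the actual kernel patch,
only the shape of the approach:

#include <stdbool.h>
#include <stdio.h>

#define NUM_COUNTERS 4

static unsigned int active_mask;   /* counters currently enabled */
static unsigned int running_mask;  /* counters that have run at least once */

static void counter_start(int idx)
{
	active_mask  |= 1u << idx;
	running_mask |= 1u << idx;   /* stays set after the counter is stopped */
}

static void counter_stop(int idx)
{
	active_mask &= ~(1u << idx); /* running_mask is deliberately left set */
}

/* stand-in for reading the hardware counter and checking for overflow */
static bool counter_overflowed(int idx)
{
	(void)idx;
	return false; /* no active counter overflowed in this example */
}

/*
 * Called for each NMI; returns how many interrupts were accounted for.
 * A non-zero return tells the NMI core the interrupt was ours, so no
 * "unknown NMI" message is printed.
 */
static int handle_nmi(void)
{
	int idx, handled = 0;

	for (idx = 0; idx < NUM_COUNTERS; idx++) {
		if (!(active_mask & (1u << idx))) {
			/*
			 * A disabled counter can still deliver one late
			 * interrupt; if it ever ran, claim the NMI once.
			 */
			if (running_mask & (1u << idx)) {
				running_mask &= ~(1u << idx);
				handled++;
			}
			continue;
		}

		if (counter_overflowed(idx))
			handled++;  /* normal case: service the event here */
	}

	return handled;
}

int main(void)
{
	counter_start(0);
	counter_stop(0);

	/* a late NMI arrives after counter 0 was disabled */
	printf("handled = %d\n", handle_nmi());
	return 0;
}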

-Robert

>
> ---
> arch/x86/kernel/cpu/perf_event.c | 21 ++++++++++++++++++++-
> 1 files changed, 20 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
> index 0fb1705..9a261ac 100644
> --- a/arch/x86/kernel/cpu/perf_event.c
> +++ b/arch/x86/kernel/cpu/perf_event.c
> @@ -1145,6 +1145,22 @@ static void x86_pmu_del(struct perf_event *event, int flags)
> perf_event_update_userpage(event);
> }
>
> +static int fixup_overflow(int idx)
> +{
> + u64 val;
> +
> + rdmsrl(x86_pmu.perfctr + idx, val);
> + if (!(val & (1ULL << (x86_pmu.cntval_bits - 1)))) {
> + val = (u64)(-x86_pmu.max_period);
> + val &= x86_pmu.cntval_mask;
> + wrmsrl(x86_pmu.perfctr + idx, val);
> +
> + return 1;
> + }
> +
> + return 0;
> +}
> +
> static int x86_pmu_handle_irq(struct pt_regs *regs)
> {
> struct perf_sample_data data;
> @@ -1159,8 +1175,11 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
> cpuc = &__get_cpu_var(cpu_hw_events);
>
> for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> - if (!test_bit(idx, cpuc->active_mask))
> + if (!test_bit(idx, cpuc->active_mask)) {
> + if (fixup_overflow(idx))
> + handled++;
> continue;
> + }
>
> event = cpuc->events[idx];
> hwc = &event->hw;
>

--
Advanced Micro Devices, Inc.
Operating System Research Center


