Subject: Re: [PATCH] perf, x86: catch spurious interrupts after disabling counters
Robert,

Does this mean that, with this patch, we no longer need Don's
back-to-back NMI patch?


On Wed, Sep 15, 2010 at 6:20 PM, Robert Richter <robert.richter@amd.com> wrote:
> On 14.09.10 19:41:32, Robert Richter wrote:
>> I found the reason why we get the unknown NMI. For some reason
>> cpuc->active_mask in x86_pmu_handle_irq() is zero, so no counters
>> are handled when the NMI arrives. There seems to be a race somewhere
>> in the accesses to active_mask. So far I don't have a fix available.
>> Changing x86_pmu_stop() did not help:
>
> The patch below for tip/perf/urgent fixes this.
>
> -Robert
>
> From 4206a086f5b37efc1b4d94f1d90b55802b299ca0 Mon Sep 17 00:00:00 2001
> From: Robert Richter <robert.richter@amd.com>
> Date: Wed, 15 Sep 2010 16:12:59 +0200
> Subject: [PATCH] perf, x86: catch spurious interrupts after disabling counters
>
> Some cpus still deliver spurious interrupts after a counter has been
> disabled. This caused 'undelivered NMI' messages. This patch catches
> and accounts for those interrupts.
>
> Signed-off-by: Robert Richter <robert.richter@amd.com>
> ---
>  arch/x86/kernel/cpu/perf_event.c |   13 ++++++++++++-
>  1 files changed, 12 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
> index 3efdf28..df7aabd 100644
> --- a/arch/x86/kernel/cpu/perf_event.c
> +++ b/arch/x86/kernel/cpu/perf_event.c
> @@ -102,6 +102,7 @@ struct cpu_hw_events {
>         */
>        struct perf_event       *events[X86_PMC_IDX_MAX]; /* in counter order */
>        unsigned long           active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
> +       unsigned long           running[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
>        int                     enabled;
>
>        int                     n_events;
> @@ -1010,6 +1011,7 @@ static int x86_pmu_start(struct perf_event *event)
>        x86_perf_event_set_period(event);
>        cpuc->events[idx] = event;
>        __set_bit(idx, cpuc->active_mask);
> +       __set_bit(idx, cpuc->running);
>        x86_pmu.enable(event);
>        perf_event_update_userpage(event);
>
> @@ -1141,8 +1143,17 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
>        cpuc = &__get_cpu_var(cpu_hw_events);
>
>        for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> -               if (!test_bit(idx, cpuc->active_mask))
> +               if (!test_bit(idx, cpuc->active_mask)) {
> +                       /*
> +                        * Though we deactivated the counter, some
> +                        * cpus might still deliver spurious
> +                        * interrupts. Catch and account for them
> +                        * here.
> +                        */
> +                       if (__test_and_clear_bit(idx, cpuc->running))
> +                               handled++;
>                        continue;
> +               }
>
>                event = cpuc->events[idx];
>                hwc = &event->hw;
> --
> 1.7.2.2
>
> --
> Advanced Micro Devices, Inc.
> Operating System Research Center
>
>
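
To make the mechanism concrete, here is a minimal user-space sketch of
the bookkeeping the patch introduces (all names below are illustrative,
not the kernel code): stopping a counter clears it from active_mask but
leaves its bit set in running, so exactly one late NMI from that counter
is still reported as handled rather than unknown.

#include <stdio.h>

#define NUM_COUNTERS 4

static unsigned long active_mask;  /* counters currently armed */
static unsigned long running;      /* armed counters, NMI possibly in flight */

static void pmu_start(int idx)
{
        active_mask |= 1UL << idx;
        running |= 1UL << idx;
}

static void pmu_stop(int idx)
{
        /* The hardware may still have one more NMI in flight. */
        active_mask &= ~(1UL << idx);
}

/* Returns how many counters this NMI was attributed to. */
static int pmu_handle_irq(void)
{
        int idx, handled = 0;

        for (idx = 0; idx < NUM_COUNTERS; idx++) {
                if (!(active_mask & (1UL << idx))) {
                        if (running & (1UL << idx)) {
                                /*
                                 * Spurious interrupt from a counter we
                                 * just stopped: count it as handled.
                                 */
                                running &= ~(1UL << idx);
                                handled++;
                        }
                        continue;
                }
                /* Active counter: normal overflow handling goes here. */
                handled++;
        }
        return handled;
}

int main(void)
{
        pmu_start(0);
        pmu_stop(0);
        /* A late NMI right after stopping is caught once ... */
        printf("first NMI:  handled = %d\n", pmu_handle_irq()); /* 1 */
        /* ... but a second one would again be unknown. */
        printf("second NMI: handled = %d\n", pmu_handle_irq()); /* 0 */
        return 0;
}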