Subject: Re: [PATCH] perf, x86: catch spurious interrupts after disabling counters
On 23.09.10 23:18:34, Don Zickus wrote:

> > I was able to duplicate the problem and can confirm this patch fixes the
> > issue for me. I tried poking around (similar to things Robert probably
> > did) and had no luck. Something just doesn't make sense, but I guess for
> > now this patch is good enough for me. :-)
>
> Ah ha! I figured out what the problem was: we need to disable the pmu while
> processing the NMI. :-) Finally something simple in this crazy unknown
> NMI spree.
>
> Oh yeah and trace_printk is now my new favorite tool!
>
> From: Don Zickus <dzickus@redhat.com>
> Date: Thu, 23 Sep 2010 22:52:09 -0400
> Subject: [PATCH] x86, perf: disable pmu from counting when processing irq
>
> On certain AMD and Intel machines, the pmu is left enabled
> while the counters are reset during handling of the NMI.
> After a counter is reset, the code continues to process the
> overflow. During this time another counter overflow interrupt
> can happen because the counter is still ticking. This leads to
> an unknown NMI.

I don't like the approach of disabling all counters in the nmi
handler. First, it stops counting and thus may falsify the
measurement. Second, it introduces a lot of overhead, doing a
rdmsrl()/wrmsrl() pair for each counter.
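
Just to make the cost concrete, per NMI that means roughly the
following loop (paraphrasing x86_pmu_disable_all(); the exact MSR
addressing differs between vendors and kernel versions), plus the
mirror image in x86_pmu_enable_all() on the way out:

	/*
	 * Sketch only, not the literal kernel code: clear the enable
	 * bit in the event-select MSR of every active counter.
	 */
	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
		u64 val;

		if (!test_bit(idx, cpuc->active_mask))
			continue;
		rdmsrl(x86_pmu.eventsel + idx, val);
		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
		wrmsrl(x86_pmu.eventsel + idx, val);
	}

Doing that twice (disable and re-enable) on every PMI adds up.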

Does your patch below solve something that my patch doesn't?

Btw, Ingo, my patch should be applied to tip/perf/urgent as it fixes
the regression you discovered on AMD systems.
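
For reference, the idea of my patch is to leave the pmu untouched and
instead remember which counters were running recently, so that a late
NMI from a counter we just stopped is still accounted as handled
rather than becoming an unknown NMI. Roughly (a sketch of the idea,
not the literal patch):

	/*
	 * Sketch: cpuc->running is an additional per-cpu bitmask,
	 * set in x86_pmu_start() and only cleared here, so an NMI
	 * that was already in flight when the counter got disabled
	 * is swallowed instead of going unhandled.
	 */
	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
		if (!test_bit(idx, cpuc->active_mask)) {
			if (__test_and_clear_bit(idx, cpuc->running))
				handled++;
			continue;
		}
		/* ... normal overflow handling ... */
	}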

Thanks,

-Robert

>
> static int x86_pmu_handle_irq(struct pt_regs *regs)
> {
>
> <snip..>
>
> 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> 		if (!test_bit(idx, cpuc->active_mask))
> 			continue;
>
> <snip..>
>
> counter reset-->	if (!x86_perf_event_set_period(event))
> 				continue;
>
> still ticking-->	if (perf_event_overflow(event, 1, &data, regs))
> <boom overflow>
> stopped here -->		x86_pmu_stop(event);
>
> The way to solve this is to disable the pmu while processing the
> overflows and re-enable when done.
>
> Signed-off-by: Don Zickus <dzickus@redhat.com>
> ---
> arch/x86/kernel/cpu/perf_event.c | 4 ++++
> 1 files changed, 4 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
> index 48c6d8d..d4fe95d 100644
> --- a/arch/x86/kernel/cpu/perf_event.c
> +++ b/arch/x86/kernel/cpu/perf_event.c
> @@ -1158,6 +1158,8 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
>
>  	cpuc = &__get_cpu_var(cpu_hw_events);
>
> +	x86_pmu_disable_all();
> +
>  	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
>  		if (!test_bit(idx, cpuc->active_mask))
>  			continue;
> @@ -1182,6 +1184,8 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
>  			x86_pmu_stop(event, 0);
>  	}
>
> +	x86_pmu_enable_all(0);
> +
>  	if (handled)
>  		inc_irq_stat(apic_perf_irqs);
>
> --
> 1.7.2.3
>
>

--
Advanced Micro Devices, Inc.
Operating System Research Center


